EACC & Member News

Loyens & Loeff – Pillar Two: Treatment of GILTI for Pillar Two purposes – New York Office Snippet

Loyens & Loeff NY regularly posts ‘Snippets’ on a range of EU tax and legal topics. This Snippet describes the treatment of GILTI for Pillar Two (‘P2’) purposes.

https://www.loyensloeff.com/insights/news-events/news/pillar-two-treatment-of-gilti-for-pillar-two-purposes-new-york-office-snippet/

Taylor Wessing – The reform of marketing authorization procedures for medicinal products in the European Union

European pharmaceutical legislation is about to undergo fundamental reform. The planned changes relate in particular to the simplification and shortening of the approval procedure. But what exactly is changing?

https://www.taylorwessing.com/en/insights-and-events/insights/2023/06/die-reform-der-zulassungsverfahren-fuer-arzneimittel-in-der-europaeischen-union

Deloitte – The Netherlands: the first EU country with a bill for the Minimum Tax Rate Act 2024

The bill for the Minimum Tax Rate Act 2024 (legislation for implementing a global minimum tax under Pillar Two) was presented to the Dutch parliament today, together with the Memorandum of Understanding.

Loyens & Loeff: FDI Screening in the Netherlands

As a result of globalization and digitalization, geopolitics, investments, and (national) security have become increasingly intertwined. This new dynamic has raised concerns about undesirable foreign interference. To protect national security, the Netherlands has, like many other countries, several FDI screening mechanisms: (i) a general FDI screening mechanism, (ii) a sector-specific mechanism for the energy (electricity and gas) sector, and (iii) a sector-specific screening mechanism for the telecommunications sector.

In this edition of Quoted, new forms of Dutch (F)DI screening are addressed:

  • Introduction
  • Why FDI screening in the Netherlands?
  • The FDI screening regulation
  • The Investments, Mergers and Acquisitions Security Screening (Vifo) Act
  • Telecommunications Sector (Undesirable Influence) Act
  • The Gas Act and the Electricity Act 1998
  • The consequences in practice (M&A)
  • Conclusion

See also our infographic for more information.

Taylor Wessing – Views across Europe: the Irish DPC’s decision on Meta’s international data transfers

Without doubt, the publication on Monday of the Irish Data Protection Commission’s (DPC) decision on Meta Ireland’s (formerly Facebook) international data transfers is hugely consequential. Although the amount of the fine (1.2 billion euros) is headline-grabbing, the ruling also included an order requiring Meta to suspend future transfers and to bring its processing operations into compliance by ceasing unlawful processing of EU/EEA user data. While on the face of it Monday’s decision is only binding on Meta, the ruling has implications beyond Meta and beyond transfers of data to the US.

Members of our European team provide their initial thoughts on the implications of the decision below.

Giving the German perspective, Paul Voigt writes:

The German regulators have been a key driver in arguing for amendments to the initial decision of the Irish DPC. They have also been pushing strongly for a very high fine on Meta. According to the German authorities, it would have set a bad precedent if Meta had not been fined: Meta knew about the issues relating to its data transfers to the US but chose to continue with them. German regulators were critical of Meta’s position since, rather than taking action, Meta preferred to wait until the upcoming Data Privacy Framework (DPF) between the EU and the US (which will permit cross-border data transfers from the EU to the US in future) had been passed. Besides the fine, the German regulators also advocated for an obligation on Meta to delete data that had already been unlawfully transferred to the US. The obligation to delete data may continue to be an issue for Meta even after the Data Privacy Framework is law and an available data transfer solution for EEA-US data transfers.

This means that, while the German regulators have not themselves actively enforced the Schrems II requirements, and no relevant fines for such non-compliance have been issued by German regulators so far, the issue of cross-border data transfers remains a key priority for the German data protection regulators.

Teresa Pereyra from ECIJA writes from the Spanish perspective:

At the time of writing, the Spanish DPA (Agencia Española de Protección de Datos) has not issued any pronouncement or opinion on the DPC’s decision on Meta, nor is it expected to do so (beyond a press release or similar announcing the publication of the decision) in the near future. In fact, the Spanish DPA has so far not issued any opinion, decision or assessment of the Schrems II ruling. However, it should be noted that the Spanish DPA is the most active supervisory authority in the European Union (issuing 40.2% of the total sanctions imposed in 2022 in the European Economic Area), even though it would be unlikely to issue sanctions as significant as the one imposed on Meta. Significantly, the Spanish DPA was the first authority to sanction Facebook in Europe, although it has not assessed or sanctioned Facebook’s international data transfers in its latest resolutions.

Even though the Spanish DPA has not so far initiated regulatory action concerning the legality of international data transfers, this does not mean it will not sanction entities in this area of compliance in the future. In our experience and from observing the sanctioning practice of the Spanish DPA, we consider that, when taking its decisions, the DPA focuses especially on the due diligence practices of the entities being investigated. Our recommendation is therefore for businesses to focus on maintaining measures to demonstrate compliance with the principle of accountability under GDPR and to keep associated records.

Marc Schuler writes from France:

Within the framework of the dispute submitted to EDPB, the French DPA (Commission Nationale de l’Informatique et des Libertés) strongly objected to the draft decision issued by the Irish DPC. It advocated for an administrative fine to be imposed on Meta in addition to the suspension of data transfers, considering that such a fine should have punitive effects and act as an incentive for compliance for other controllers transferring personal data under similar conditions. The French DPA further insisted on the fact that the infringement was a particularly serious breach in view of the impact of Meta’s practices on the privacy of data subjects, exposing a massive volume of personal data to US Government surveillance programs, and also that such an infringement should be considered as deliberate.

The French DPA has demonstrated in past decisions its capacity to impose heavy fines and orders in cases of GDPR infringement. It has also issued cease and desist notifications to website publishers in relation to the use of Google Analytics to the extent it resulted in the transfer of personal data to the United States without proper safeguards. Although data transfers are not one of the announced priority topics for the French DPA in 2023, any data transfer outside the EU should be carefully assessed for compliance to remain on the safe side.

Giving the Dutch perspective, Otto Sleeking writes:

As the Irish DPC’s decision to fine Meta was based on the EDPB’s binding dispute resolution decision of 13 April 2023, it holds relevance for all EU Member States. Having said that, it is hard to imagine the Dutch DPA (Autoriteit Persoonsgegevens) handing out fines like these in the near future. For one, the Dutch DPA typically does not hand out fines quickly, and when fines are applied they are usually quite low in comparison to fines in other EU Member States (particularly France and Germany). We therefore do not expect that this ‘superfine’ will immediately trigger a different approach in the Netherlands. In addition, the Dutch DPA has not been very vocal on the subject of international data transfers since the Schrems II decision, and has not yet made a decision on the use of Google Analytics, for instance. A suspension of data transfers also seems unlikely, although it would be a logical consequence of not complying with the law on data transfers.

We expect that, irrespective of the Irish DPC’s decision, the Dutch DPA will continue to assess data transfers on a case-by-case basis. Companies should therefore still be able to transfer data outside of the EU, as long as they follow all the necessary steps, such as applying transfer risk assessments and using appropriate safeguards in addition to the appropriate SCCs where needed.

Victoria Hordern writes from the UK:

It is unlikely that the UK will take the same approach as the DPC in dealing with international data transfers. Since the UK is no longer a member of the EU, the DPC’s decision does not bind the UK regulator, the Information Commissioner’s Office (ICO). We know that the ICO has yet to take regulatory enforcement action against a business for failure to comply with the rules on international data transfers. The mood music from ICO guidance and the direction of the UK Government’s reform of data protection law also suggests that the regulator will not prioritise investigations or enforcement in the area of international data transfers unless significant harm exists. In other words, technical non-compliance with the rules (absent any evident harm) is unlikely in and of itself to attract the scrutiny of the ICO. Therefore, although UK businesses transferring data to the US should still follow the law in putting in place appropriate safeguards (like a recognised contractual transfer mechanism) and carry out transfer risk assessments, it is unlikely that any equivalent complaint made to the ICO about transfers to the US would result in such detailed scrutiny and enforcement.

So what does this mean?

This decision has been a long time coming although the final ruling was not unexpected given the EDPB’s position, the Schrems II decision from the Court of Justice of the EU and the current state of US law. This regulatory action catapults international data transfers to the top of the list for EU data protection authorities and makes compliance more of a minefield.

It’s important to remember that this ruling does not in itself ban data transfers to the US. But it does signal that there are multiple challenges for (i) a transfer of data to the US where the recipient is an electronic communications service provider under s. 702 FISA, or (ii) a data transfer to any other non-EEA and non-adequate country with equivalent law or practice.

Yet, as the survey of a number of experts from European countries above demonstrates, the DPC decision is unlikely to lead to a rash of equivalent enforcement actions breaking out across Europe. The decision confirms that European businesses transferring personal data to the US can still rely on the EU Standard Contractual Clauses providing they carry out transfer impact assessments and, where required, implement appropriate supplementary measures. In doing so, they will want to distinguish their circumstances as far as possible from Meta’s. In time, data transfers from the EU to the US should become smoother once the Data Privacy Framework is available although the DPF itself is, of course, open to challenge.

Taylor Wessing’s team of data protection experts are available to assist with any questions.

Taylor Wessing – The EU’s AI Act heads towards final negotiations

What’s the issue?

The EU’s approach to regulating AI is through top-down umbrella legislation. The European Commission proposed an AI Act in April 2021 as discussed here. The AI Act is intended to regulate the development and use of AI by providing a framework of requirements and obligations on its developers, deployers and users, together with regulatory oversight. The framework will be underpinned by a risk-categorisation for AI with ‘high-risk’ systems subject to the most stringent obligations, and a ban on ‘unacceptable-use’ systems.

Much of the subsequent debate around the draft AI Act has focused on the risk-categorisation system and definitions.

What’s the development?

The European Parliament has provisionally agreed its negotiating position (likely to be formally adopted on 14 June 2023), which follows on from the Council adopting its position in December 2022. This means trilogues to arrive at the final version of the Act are likely to begin in early summer.

The Council’s position

The Council of the European Union’s proposed changes include:

  • a narrower definition of AI systems to cover systems developed through machine learning approaches and logic- and knowledge-based approaches
  • private sector use of AI for social scoring is prohibited, as are AI systems which exploit the vulnerabilities not only of a specific group of persons but also of persons who are vulnerable due to their social or economic situation
  • clarification of when real-time biometric identification systems can be used by law enforcement
  • clarification of the requirements for high-risk AI systems and the allocation of responsibility in the supply chain
  • new provisions relating to general purpose AI and to where it is integrated into another high-risk system
  • clarification of exclusions applying to national security, defence and the military as well as where AI systems are used for the sole purpose of research and development or for non-professional purposes
  • simplification of the compliance framework
  • more proportionate penalties for non-compliance for start-ups and SMEs
  • increased emphasis on transparency, including a requirement to inform people exposed to emotion recognition systems
  • measures to support innovation.

The European Parliament’s position

MEPs have suggested a number of potentially significant amendments to the Commission’s proposal.

Unacceptable-risk AI

An amended list of banned ‘unacceptable-risk’ AI to include intrusive and discriminatory uses of AI systems such as:

  • real-time remote biometric identification systems in publicly accessible spaces
  • post remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorisation
  • biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation)
  • predictive policing systems (based on profiling, location or past criminal behaviour)
  • emotion recognition systems in law enforcement, border management, workplace, and educational institutions
  • indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

High-risk AI

Suggested changes would expand the scope of the high-risk areas to include harm to people’s health and safety, fundamental rights, or the environment. High-risk systems will include AI systems used to influence voters in political campaigns and in social media recommender platforms (with more than 45m users under the DSA). High-risk obligations are more prescriptive, with a new requirement to carry out a fundamental rights assessment before use. However, the European Parliament’s proposal also provides that an AI system which ostensibly falls within the high-risk category but which does not pose a significant risk can be notified to the relevant authority as being low-risk. The authority will have three months to object, during which time the AI system can be launched. Misclassifications will be subject to fines.

Enhanced measures for foundation and generative AI models

Providers of foundation model AIs would be required to guarantee protection of fundamental rights, health and safety, the environment, democracy and the rule of law. They would be subject to risk assessment and mitigation requirements, data governance provisions, and to obligations to comply with design, information and environmental requirements, as well as to register in the EU database.

Generative AI model providers would be subject to additional transparency requirements, including to disclose that content is generated by AI. Models would have to be designed to prevent them from generating illegal content and providers will need to publish summaries of copyrighted data used for training. They will also be subject to assessment by independent third parties.

Additional rights

MEPs propose additional rights for citizens to file complaints about AI systems and receive explanations of decisions reached by high-risk AI systems that significantly impact them.

See here for more on the European Parliament’s position.

What does this mean for you?

Anyone developing, deploying or using AI in the EU, placing AI systems on the EU market or putting them into service there, or whose systems produce output used in the EU, will be impacted by the AI Act and will be waiting for the outcome of the trilogues. The European Commission is hoping that the AI Act will be in force by the end of 2023, following which there will be a two-year implementation period.

Find out more

  • You can use our Digital Legislation Tracker to keep on top of incoming digital legislation, including the AI Act. There is also a page dedicated to the AI Act here.
  • For a deep-dive into the AI Act as originally proposed, see our Interface edition here.
  • For more on AI and regulatory approaches around the world, see here.

AKD – DORA-readiness of capital market participants tested by the AFM

The Dutch Authority for the Financial Markets (the ‘AFM’) performed an exploratory study of trading venues and firms trading for their own risk and account (together, the ‘capital market firms’) to investigate whether the capital market firms have a resilient ICT incident management process and are compliant with the upcoming Digital Operational Resilience Act (‘DORA’). The results showed some gaps between the ICT incident management processes in place and the requirements set by DORA.

The AFM performed this study after it observed an increase in ICT-related incidents occurring in the capital markets. As part of the study, the maturity of ICT incident management was assessed. The AFM found that the investigated entities had procedures and processes in place to identify, document, and manage ICT-related events and incidents. Furthermore, it saw a strong correlation between the size of a firm and the maturity of its ICT incident management.

The AFM has provided an overview of controls identified in the study (by the investigated entities) that capital market firms can implement to improve their ICT incident management, including:

  1. use of ICT event categorisation and prioritisation.
  2. incorporation of a dedicated ICT security department that implements tools to identify cyber security events and a security event response plan to counter cyber threats.
  3. periodical review of the ICT-related risk management framework to ensure compliance with regulatory requirements and keep up to speed with technology developments.
  4. root cause analyses of ICT-related incidents and action plans to prevent their recurrence by identifying and eliminating the underlying cause.
  5. identification and use of key performance indicators (‘KPIs’) concerning ICT events and incidents to showcase to the management whether certain goals are achieved.
  6. service level agreements to manage outsourced ICT functions (if any) on the basis of which these third parties report on KPIs and provide incident reports.
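As a purely illustrative sketch (not anything prescribed by DORA or the AFM), the first and fifth controls above, event categorisation/prioritisation and KPI reporting to management, might be structured as follows. All names, categories and severity labels here are hypothetical:

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum


# Hypothetical severity scale; DORA itself does not prescribe these labels.
class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class ICTEvent:
    """A single ICT-related event, categorised and prioritised (control 1)."""
    description: str
    category: str  # e.g. "availability", "security", "data integrity"
    severity: Severity
    opened: datetime
    resolved: datetime | None = None  # None while the event is still open


@dataclass
class IncidentRegister:
    """Minimal register supporting KPI reporting to management (control 5)."""
    events: list[ICTEvent] = field(default_factory=list)

    def log(self, event: ICTEvent) -> None:
        self.events.append(event)

    def open_by_priority(self) -> list[ICTEvent]:
        # Unresolved events needing escalation, highest severity first.
        return sorted(
            (e for e in self.events if e.resolved is None),
            key=lambda e: e.severity.value,
            reverse=True,
        )

    def mean_time_to_resolve(self) -> timedelta | None:
        # A common KPI: average time between opening and resolving an event.
        closed = [e for e in self.events if e.resolved is not None]
        if not closed:
            return None
        total = sum((e.resolved - e.opened for e in closed), timedelta())
        return total / len(closed)
```

In practice, capital market firms would map these ideas onto their existing ticketing and SIEM tooling rather than bespoke code; the sketch only illustrates how categorisation, prioritisation and a resolution-time KPI fit together.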

In 2025, DORA will come into force. By then, capital market firms will have to comply with strict(er) rules regarding ICT risk management, including ICT incident management. To ensure compliance, the AFM calls on capital market firms to start implementing DORA in a timely manner.

This call for action is also relevant for other financial institutions and ICT third-party service providers, as they must also comply with DORA. To support you with the implementation of DORA, we will continue to publish blog posts addressing the requirements in more detail.

We are also available to assist with a deep-dive analysis of the needs of your organisation in respect of compliance with DORA and support your implementation programme.

It’s time to start exploring and get ready for action!
