The EU Regulation on Artificial Intelligence (AI Act) entered into force in August 2024 and becomes applicable in stages, following a timeline that began in February 2025. The AI Act introduces a binding legal framework for the development and use of AI systems and models in the European Union, aimed at protecting fundamental rights and ensuring compliance with safety and transparency standards. Our firm can provide qualified assistance at every stage of the compliance process, offering advice on risk assessments, technical and documentary compliance, and personal data protection. With our blend of technical and legal expertise, we offer a solid, strategic path to AI Act compliance and the protection of your business.

The AI Act defines four risk categories: unacceptable, high, limited, and low risk. Our team will assist you in classifying your AI systems accordingly, conducting thorough risk assessments, and implementing the measures needed to ensure full compliance with the regulatory requirements.

Compliance assessment is a critical process for ensuring that AI systems meet the AI Act's requirements on safety, transparency, and the protection of fundamental rights. Our team will help you understand the requirements applicable to your risk category and assess the quality of your data, algorithms, and outputs, summarizing the findings and the appropriate corrective actions in a final report.

Where an AI system falls short of the AI Act's requirements, adaptation measures are needed. Our firm provides legal and technical assistance throughout the process of analyzing the areas of noncompliance and designing and implementing the corrective actions required to meet the new standards, protecting both your business and the interests of your users.

Monitoring and reporting on AI systems are essential to uphold the human-centric approach, a key principle underlying the regulation. Our firm will support you in monitoring the impact of automated decisions, mitigating potential risks, and implementing a reporting system consistent with the human-centric approach and the “human-in-the-loop” principle, ensuring that your AI systems operate ethically and in compliance with the AI Act.

Codes of conduct are essential tools for raising awareness among companies and their employees of the importance of ethical and lawful practices in the implementation and use of AI systems. Our firm drafts codes of conduct that promote the responsible use of artificial intelligence in line with the rules established by the AI Act, taking into account the principles of transparency, accountability, and respect for fundamental rights, as well as your specific business needs.

Our team is available to develop a training program, which the AI Act makes mandatory from February 2025. The program will be tailored to your needs and challenges, with in-company sessions focused on the AI Act and its impact on your organization. These sessions provide a detailed analysis of the regulatory provisions and their specific implications for your business, along with practical tools to ensure compliance and effective management of AI systems.