The AI (artificial intelligence) market is poised to grow tremendously in the next few years. A recent report from Technavio projects the market will grow by a massive $76.4 billion between 2021 and 2025, at a CAGR (compound annual growth rate) of 21%. Factors contributing to this growth include the use of AI in cybersecurity and fraud prevention in an increasingly digital age. AI’s ability to enhance employee productivity and streamline operations is another reason companies are adopting AI solutions, especially after the hit so many businesses took in 2020 due to the COVID-19 pandemic. But as AI adoption grows, so does the need for conversations about trust. Does the industry need more than conversations? Does it need rules?
The European Commission recently proposed new rules for AI with the goal of turning Europe into a “global hub for trustworthy artificial intelligence.” The proposal follows through on a commitment made by Ursula von der Leyen, president of the European Commission, who challenged the commission to put forward legislation for a coordinated European approach to the human and ethical implications of AI. Last year, the commission published a white paper on the topic, which set the stage for the newly proposed legal framework, itself informed by a large-scale consultation of stakeholders. The proposal states that “AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human wellbeing.” Accordingly, the framework and the coordinated plan for EU member states to carry out are meant to guarantee the safety and fundamental rights of people and businesses while simultaneously encouraging AI investment, innovation, and adoption.
The rules ban AI systems that pose unacceptable risk, including anything that manipulates human behavior to coerce individuals and systems that allow “social scoring” by governments. High-risk AI systems will be subject to “strict obligations” before they can be put on the market; this category includes AI technology used for applications such as transportation, exam scoring, robot-assisted surgery, recruitment, credit scoring, and law enforcement, among others. The commission also proposed a new Machinery Regulation to define health and safety requirements for increasingly smart machines, including robots. The regulation is meant to help entities navigate the safe integration of AI systems into machinery.
What do the new rules mean for global AI adoption and innovation? The EU’s goal is to boost its global competitiveness by taking a strong stance on the need for human-centric, secure, inclusive, and trustworthy AI without hindering innovation. The rest of the developed world will need to take similar steps to remain globally competitive. Trust in AI isn’t a nice-to-have anymore; it’s a must-have. AI systems used in increasingly complex situations to aid or even replace human judgement require a certain level of transparency. The more important the decisions being made, the higher the standards to which AI algorithms must be held. It can be a tough balance to strike: on the one hand, rules and regulations protect businesses and individuals; on the other, they can hold innovation back. But the industry needs to get this right. In fact, what would really hold AI back is not regulation meant to create trust but a lack of transparency and trust.
Want to tweet about this article? Use hashtags #IoT #sustainability #AI #5G #cloud #edge #digitaltransformation #machinelearning #cybersecurity #artificialintelligence #Europe #futureofwork