If you turn on the news, it’s hard to distinguish between fiction and reality when it comes to AI. Fears of irresponsible AI are everywhere - from anxieties that humans could become obsolete to concerns over privacy and control. Some are even worried that today’s AI will turn into tomorrow’s real-life “Skynet” from the Terminator series.
Arnold Schwarzenegger says it best in an article for Variety, “Today, everyone is frightened of it [AI], of where this is gonna go.” Although many AI-related fears are overblown, the technology does raise safety, privacy, bias, and security concerns that can’t be ignored. With the rapid advance of generative AI technology, government agencies and policymakers around the world are accelerating efforts to create laws and provide guardrails to manage the potential risks of AI. Stanford University's 2023 AI Index shows 37 AI-related bills were passed into law globally in 2022.
Emerging AI Regulations in the US and Europe
The most significant developments in AI regulation are the EU AI Act (AIA) and the new US Executive Order establishing standards for safe, secure, and trustworthy AI. The European Parliament, the first major regulator to make laws about AI, created these regulations to provide guidance on how AI can be used in both private and public sectors. These guardrails prohibit the use of AI in vital services that could jeopardize lives or cause harm, making an exception only for healthcare, subject to maximum safety and efficacy checks by regulators.
In the US, as a key component of the Biden-Harris Administration's holistic approach to responsible innovation, the Executive Order sets up new standards for AI safety and security. These actions are designed to ensure that AI systems are safe, secure, and trustworthy, protect against AI-enabled fraud and deception, enhance cybersecurity, and protect Americans’ privacy.
Canada, the UK, and China are also in the process of drafting laws for governing AI applications to reduce risk, increase transparency, and ensure they respect anti-discrimination laws.
Why do we need to regulate AI?
Generative AI, in combination with conversational AI, is transforming critical workflows in financial services, employee hiring, customer service management, and healthcare administration. With a $150 billion total addressable market, generative AI software represents 22% of the global software industry as providers offer an ever-expanding suite of AI-integrated applications.
Although generative AI models have great potential to drive innovation, without proper training and oversight they pose significant risks to using the technology responsibly and ethically. Isolated incidents of chatbots fabricating stories, such as implicating an Australian mayor in a fake bribery scandal, or the unregulated use of AI by employees of a global electronics giant, have raised concerns about its potential hazards.
The misuse of AI can lead to serious consequences, and the rapid pace of its advancement makes it difficult to control. This is why it's crucial to use these powerful tools wisely and understand their limitations. Relying too heavily on these models without the right guidance or context is extremely risky - especially in regulated fields like financial services.
Given AI’s potential for misuse, regulatory governance is needed to provide greater data privacy, protections against algorithmic discrimination, and guidance on how to prioritize safe and effective AI tools. By establishing safeguards for AI, we can take advantage of its positive applications while also effectively managing its potential risks.
Research from Ipsos, a global market research and public opinion firm, shows that most people agree the government should play at least some role in regulating AI.
What does Responsible AI look like?
The safe and responsible development of AI requires a comprehensive Responsible AI framework that keeps pace with the continuously evolving nature of generative AI models.
Such a framework should include:
- Core Principles: transparency, inclusiveness, factual integrity, understanding limits, governance, testing rigor, and continuous monitoring to guide responsible AI development.
- Recommended Practices: unbiased training data, transparency, validation guardrails, and ongoing monitoring for both model and application development.
- Governance Considerations: clear policies, risk assessments, approval workflows, transparency reports, user reporting, and dedicated roles to ensure responsible AI operation.
- Technology Capabilities: tools such as testing, fine-tuning, interaction logs, regression testing, feedback collection, and control mechanisms to implement responsible AI effectively. Built-in features for tracing customer interactions, identifying drop-off points, and analyzing training data, along with checks and balances to weed out bias and toxicity and controls that let humans train and fine-tune models, help ensure transparency, fairness, and factual integrity (a minimal guardrail sketch follows this list).
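To make these technology capabilities more concrete, below is a minimal, illustrative Python sketch of an interaction-logging and guardrail check. The `Interaction` class, `FLAGGED_TERMS` list, and function names are hypothetical placeholders rather than any specific product's API; a production system would replace the keyword list with a real toxicity or bias classifier.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_guardrails")

# Hypothetical blocklist standing in for a real toxicity/bias classifier.
FLAGGED_TERMS = {"example_slur", "example_toxic_phrase"}

@dataclass
class Interaction:
    user_input: str
    model_output: str

def passes_guardrails(interaction: Interaction) -> bool:
    """Return True if the model output clears the illustrative toxicity check."""
    lowered = interaction.model_output.lower()
    return not any(term in lowered for term in FLAGGED_TERMS)

def handle_response(interaction: Interaction, fallback: str) -> str:
    # Log every interaction so it can be traced and audited later.
    logger.info("user=%r output=%r", interaction.user_input, interaction.model_output)
    if passes_guardrails(interaction):
        return interaction.model_output
    # Route flagged responses to a safe fallback and record the event for review.
    logger.warning("Guardrail triggered; returning fallback response.")
    return fallback
```

The design point is that every response passes through the same traced, auditable checkpoint, so drop-off analysis, bias reviews, and human oversight all work from a single interaction log.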
How do new AI regulations pose challenges for Enterprises?
Enterprises will find it extremely challenging to meet compliance requirements and enforce regulations under the US Executive Order and the EU AI Act. With strict AI regulations on the horizon, companies will need to adapt their processes and tools to new policies. Without universally accepted AI frameworks, global enterprises will also face challenges adhering to regulations that differ from country to country.
Additional considerations apply to AI regulations within specific industries, which can quickly add to the complexity. In healthcare, the priority is balancing patient data privacy with prompt care, while the financial sector focuses on the strict prevention of fraud and safeguarding financial information. In the automotive industry, the emphasis is on ensuring AI-driven self-driving cars meet safety standards. For e-commerce, the priority shifts toward protecting consumer data and maintaining fair competition.
With new advancements continuously emerging in AI, it becomes even more difficult to keep up with and adapt to evolving regulatory standards.
All of these challenges create a balancing act for companies using AI to improve business outcomes. To navigate this path securely, businesses will need the right tools, guidelines, procedures, structures, and experienced AI solution providers that can guide them with confidence.
Why should enterprises care about AI regulations?
When asked to evaluate their customer service experiences with automated assistants, 1,000 consumers ranked accuracy, security, and trust among the top five most important criteria for a successful interaction. This means that the more transparent a company is with its AI and data use, the safer customers will feel when using its products and services. Adding regulatory measures can cultivate a sense of trust, openness, and responsibility among users and companies.
This finding aligns with a Gartner prediction that by 2026, the organizations that implement transparency, trust, and security in their AI models will see a 50% improvement in terms of adoption, business goals, and user acceptance.
How do AI Regulations affect AI Tech Companies?
When it comes to providing a proper enterprise solution, AI tech companies must prioritize safety, security, and stability to prevent potential risks to their clients' businesses. This means developing AI systems that focus on accuracy and reliability, so that their outputs are dependable and trustworthy. It is also important to maintain oversight throughout AI development in order to explain how the AI’s decision-making process works.
To prioritize safety and ethics, platforms should incorporate diverse perspectives to minimize bias and discrimination and focus on the protection of human life, health, property, and the environment. These systems must also be secure and resilient to potential cyber threats and vulnerabilities, with limitations clearly documented.
Privacy, security, confidentiality, and intellectual property rights related to data usage should be given careful consideration. When selecting and integrating third-party vendors, ongoing oversight should be exercised. Standards should be established for continuous monitoring and evaluation of AI systems to uphold ethical, legal, and social standards and performance benchmarks. Lastly, a commitment to continuous learning and development of AI systems is essential, adapting through training, feedback loops, user education, and regular compliance auditing to stay aligned with new standards.
Source: McKinsey - Responsible AI (RAI) Principles
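As an illustration of what continuous monitoring and feedback loops can look like in practice, here is a minimal Python sketch of a rolling user-feedback monitor that flags a model for human review when approval drops below a benchmark. The `FeedbackMonitor` class, the 100-interaction window, and the 0.85 threshold are hypothetical examples, not a prescribed standard.

```python
# Illustrative feedback-loop monitor: aggregate user ratings of AI responses
# and alert when quality drops below a benchmark. All names and thresholds
# are hypothetical examples, not a specific vendor's implementation.
from collections import deque
from statistics import mean

class FeedbackMonitor:
    def __init__(self, window_size: int = 100, benchmark: float = 0.85):
        self.ratings = deque(maxlen=window_size)  # rolling window of 0/1 ratings
        self.benchmark = benchmark                # minimum acceptable approval rate

    def record(self, helpful: bool) -> None:
        """Store a single thumbs-up/thumbs-down rating from a user."""
        self.ratings.append(1 if helpful else 0)

    def needs_review(self) -> bool:
        """Flag the model for human review if approval falls below the benchmark."""
        if len(self.ratings) < self.ratings.maxlen:
            return False  # not enough data yet
        return mean(self.ratings) < self.benchmark

monitor = FeedbackMonitor()
monitor.record(helpful=True)
monitor.record(helpful=False)
if monitor.needs_review():
    print("Approval rate below benchmark; escalate to the review team.")
```

A rolling window keeps the signal current, so a regression introduced by a model update shows up quickly instead of being diluted by months of older feedback.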
How can businesses adjust to new AI regulations?
Adjusting to new emerging AI regulations is no easy feat. These rules, designed to guarantee safety, impartiality, and transparency in AI systems, require substantial changes to numerous aspects of business procedures. “As we navigate increasing complexity and the unknowns of an AI-powered future, establishing a clear ethical framework isn’t optional — it’s vital for its future,” said Riyanka Roy Choudhury, CodeX fellow at Stanford Law School’s Computational Law Center.
Below are some of the ways that businesses can begin to adjust to these new AI regulations, focusing on four key areas: security and risk, data analytics and privacy, technology, and employee engagement.
- Security and risk. By strengthening their compliance and risk teams with competent people, organizations can understand the new requirements and associated procedures in greater detail and run better gap analyses. They need to involve security teams in product development and delivery as product safety and AI governance become a critical part of their offering.
- Data, analytics, and privacy. Chief data officers (CDOs), data management, and data science teams must work on effectively implementing the requirements and establishing governance that delivers compliant and responsible AI by design. Safeguarding personal data and ensuring privacy will be a significant part of AI governance and compliance.
- Technology. Because considerable portions of the standards and documentation needed for compliance are highly technical, AI experts from IT, data science, and software development teams will also have a central role in delivering AI compliance.
- Employee engagement. Teams responsible for security training alongside HR will be critical to this effort, as every employee who touches an AI-related product, service, or system must learn new principles, processes, and skills.
Source: Forrester Vision Report - Regulatory Overview: EU AI Rules and Regulations
How does Kore.ai ensure the safe and responsible development of AI?
Kore.ai places a strong emphasis on ensuring the safe and responsible development of AI through our comprehensive Responsible AI framework, which aligns with the rapidly evolving landscape of generative AI models. We believe such a framework is needed to ensure the safe and reliable development and use of AI, balancing innovation with ethical considerations to maximize the benefits and minimize the potential risks of AI technologies.
Our Responsible AI framework consists of these core principles, which form the foundation of our safety strategy and touch every aspect of AI practice and delivery that enterprises need.
- Transparency: We believe AI systems, particularly conversational AI, should be transparent and explainable given their widespread impact on consumers and enterprise users. When algorithmic decisions are transparent to both business and technical people, adoption improves. People should be able to trace how interactions are processed, identify drop-off points, analyze what data was used in training, and understand whether they are interacting with an AI assistant or a human. Explainability of AI is critical for easy adoption in regulated industries like banking, healthcare, insurance, and retail.
- Inclusiveness: Poorly trained AI systems invariably develop undesirable tendencies, so providers need to ensure that bias, hallucination, and other unhealthy behaviors are checked at the root. To ensure conversational experiences are inclusive, unbiased, and free of toxicity for people of all backgrounds, we implement checks and balances while designing solutions to weed out biases.
- Factual Integrity: Brands thrive on integrity and authenticity. AI-generated responses directed at customers, employees, or partners should build credibility by faithfully representing factual enterprise data and organizational brand guidelines. To avoid hallucination and misrepresentation of facts, enterprises should not over-rely on AI models trained purely on data without human supervision. Instead, they should improve models with feedback from humans through a "human-in-the-loop" (HITL) process. Using human feedback to train and fine-tune models allows them to learn from past mistakes and makes them more authentic.
- Understanding Limits: To keep up with the evolving technology, organizations should continuously evaluate model strengths and understand the limits of what AI can do in order to determine appropriate usage.
- Governance Considerations: Controls are needed to check how deployed models are being used and to maintain detailed records of their usage.
- Testing Rigor: To improve performance, AI models need to be thoroughly tested to uncover harmful biases, inaccuracies, and gaps, and continuously monitored to incorporate user feedback (a minimal regression-test sketch follows this list).
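To illustrate what this kind of testing rigor can look like in practice, below is a minimal Python sketch of a regression suite for an AI assistant, written with pytest. The `get_assistant_response` function, the golden test cases, and the flagged terms are hypothetical stand-ins rather than Kore.ai's actual test harness; the point is that every model or prompt change is re-checked against known-good answers and basic safety constraints.

```python
# Illustrative regression tests for an AI assistant, using pytest.
# `get_assistant_response` is a hypothetical stand-in for whatever function
# or API call returns the assistant's answer in your stack.
import pytest

def get_assistant_response(prompt: str) -> str:
    # Placeholder; in practice this would call the deployed model or API.
    canned = {
        "What are your support hours?": "Our support team is available 24/7.",
    }
    return canned.get(prompt, "I'm not sure, let me connect you with a human.")

# Golden test cases: prompts paired with facts the answer must contain.
REGRESSION_CASES = [
    ("What are your support hours?", "24/7"),
]

@pytest.mark.parametrize("prompt,expected_fact", REGRESSION_CASES)
def test_response_contains_expected_fact(prompt, expected_fact):
    """Re-run after every model or prompt change to catch factual regressions."""
    response = get_assistant_response(prompt)
    assert expected_fact in response

def test_response_avoids_flagged_terms():
    """A coarse safety check that responses never include known unsafe claims."""
    flagged = {"guaranteed returns", "medical diagnosis"}
    response = get_assistant_response("Can you give me investment advice?")
    assert not any(term in response.lower() for term in flagged)
```

Run as part of CI, a suite like this turns responsible-AI principles into a concrete gate: a change that breaks factual integrity or safety fails the build before it reaches users.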
Next Steps for your Organization
Understanding all the changes surrounding Responsible AI can be overwhelming. Here are a few strategies that businesses can use to stay proactive and well-prepared for upcoming regulations while also utilizing AI in a responsible manner.
Get Educated about New Policies
It's essential for businesses to keep themselves updated and educated on the latest policies and related tech regulations. This also means conducting regular assessments of existing security standards and staying up to date on amendments or steps needed for future readiness.
Evaluate AI Vendors for their AI Safety Capabilities
When comparing different AI products, it is important to ensure the vendor's AI solutions are safe, secure, and trustworthy. This involves reviewing the vendor's AI policies, assessing their reputation and security, and evaluating their AI governance. A responsible vendor should have a comprehensive and transparent policy in place that addresses potential risks, privacy, safety and ethical considerations associated with AI.
Add Responsible AI to Your Executive Agenda
Responsible AI should be a top priority for organizations, with leadership playing a crucial role in its implementation. The cost of non-compliance can be high. With the risk of security breaches and significant financial penalties, potentially exceeding a billion dollars in fines, securing support from leadership is the best way to ensure resources are prioritized for responsible AI practices and regulations.
Monitor and Participate in AI Safety Discussions
Being involved in AI safety conversations keeps businesses ahead of new updates, rules, and best practices for using AI safely. This active role allows companies to discover potential issues early and develop solutions before they become serious, lowering risks and making it easier to adopt AI technology.
Start Early in Your Responsible AI Journey
Getting started with Responsible AI early allows businesses to integrate ethical considerations, legal and regulatory requirements, and safety measures from the start, reducing risk. Businesses will also gain a competitive advantage, as customers and partners increasingly value companies that prioritize ethical and responsible practices.
Responsible AI is a field that is continuously developing, and we're all learning collectively. Staying informed and actively seeking knowledge are crucial steps for the immediate future. If you want help with assessing your options or want to know more about using AI responsibly, our team is ready to support you. Our team of experts has created educational resources for you to rely on and is ready to help you with a free consultation.