Exploring the Complex Legal and Ethical Landscape of Artificial Intelligence
The AI for Good Global Summit 2024, held in Geneva, gathered influential leaders and innovators from diverse fields to explore the transformative power of artificial intelligence (AI). Among the notable speakers was Danny Tobey, a partner at DLA Piper, who brought his background as a lawyer, medical doctor, and software entrepreneur to a discussion on the convergence of AI, law, and ethics. Recognized by the Financial Times as its Innovative Lawyer of the Year in 2023, Tobey shared insights into the challenges and opportunities that generative AI presents across sectors.
“Red teaming is an area of intense focus for us,” Tobey stated, elaborating on how his firm evaluates generative AI. In the legal domain, red teaming means rigorously probing AI models to identify and mitigate potential risks, especially in critical fields like healthcare, education, and insurance.
“As legal professionals, we have a unique role to fulfill. We must transform abstract principles like fairness and transparency into actionable frameworks that enable companies to demonstrate their commitment to these values,” Tobey explained.
He outlined DLA Piper’s approach to legal red teaming: treating AI models much like witnesses in a deposition. The firm identifies the legal risks an AI system poses, examines the societal norms and regulations governing acceptable behavior in a given industry, and then has lawyers rigorously question the model, just as they would a witness in a legal setting. This scrutiny helps ensure AI systems meet legal and ethical standards before deployment, sparing companies problems later.
The discussion then turned to the business case for ethical AI. Tobey agreed that ethical practices will become a competitive advantage. “If for no other reason, those who ignore this will be heading towards disaster,” he commented. He underscored the need for upfront investment to ensure AI is safe, reliable, and consistent, and pointed to ongoing oversight as a key part of that work:
“One of our major roles is to help companies establish ongoing monitoring for their AI systems,” Tobey stated.
Discussing the balance between innovation and regulatory oversight, Tobey drew on his experience as a former software entrepreneur. He acknowledged the iterative nature of software development and the need for practical safeguards. Rather than aiming for unattainable perfection, he argued for thoughtfully anticipating potential failures and building in safeguards while still fostering innovation.
A significant concern in AI development is the risk of bias and inequality. Tobey emphasized the need to clearly define terms such as fairness, bias, and accessibility. He observed that the conversation around AI governance has evolved, shifting from ethical AI to responsible AI, and now to legal AI.
“While we may disagree on the philosophical interpretation of fairness, we are clear on the legal definitions surrounding discrimination, bias, and emotional harm,” he elucidated.
This legal framework offers a pragmatic approach for organizations to validate that their AI systems align with societal standards.
Tobey also pointed out the immense opportunities AI offers, particularly in enhancing access to justice. He acknowledged that numerous individuals worldwide remain without adequate access to legal representation or the judicial system, often due to lengthy, costly, and cumbersome processes.
“I believe AI is a remarkable tool for expanding access to legal information,” he asserted.
To support this vision, DLA Piper has launched the AI Law and Justice Institute, a non-profit initiative working under the AI for Good umbrella. This institute aims to unite experts in crafting responsible, consistent, and affordable legal frameworks, with the inaugural symposium planned at Stanford in the fall.
When asked about the most exciting advancements in AI, Tobey highlighted the potential of generative AI as a tool for communication. Today, he observed, generative AI functions predominantly as a Q&A mechanism, but he envisions a future in which it acts as a translator, enabling seamless natural-language interaction with a wide range of AI and technology systems. That would make interacting with technology more intuitive and democratize access to AI’s capabilities, while heightening the need for attention to safety and ethics.
In response to queries about AI’s potential to replace lawyers, Tobey shared a well-known adage:
“AI will not replace lawyers, but lawyers who use AI will replace those who do not.”
He believes that while AI may handle routine tasks, the essential human aspects of legal practice—communication, consensus-building, and negotiation—cannot be replicated. According to Tobey, there is an intrinsic humanity in legal interactions and negotiations that ensures lawyers will remain indispensable.
Danny Tobey’s remarks at the AI for Good Global Summit 2024 shed light on the pivotal role legal professionals play in the ethical deployment of AI. His vision of combining rigorous legal standards with state-of-the-art technology offers a practical guide for navigating the intricate landscape of AI ethics and regulation. As AI continues to advance, Tobey’s approach provides a balanced perspective on harnessing its capabilities while safeguarding societal values.