AI in 2024: Business Expectations vs. Reality and Regulatory Responses



by FARUK IMAMOVIC


Among today's emerging technologies, artificial intelligence (AI) stands out as a beacon of progress and potential. Businesses across various sectors have eagerly integrated generative AI tools into their operations, hoping to harness this potential for increased efficiency and profitability.

However, the past year has painted a complex picture. Despite the enthusiasm, many of these businesses have not witnessed the returns they anticipated. This discrepancy has led to a growing skepticism about the practical efficacy of AI in business settings, challenging the once unshakeable faith in its capabilities.

The Reality of AI in Business

Arijit Sengupta, CEO of AI app developer Aible and a notable figure in Harvard Business School's AI curriculum, has voiced a critical perspective on this unfolding scenario. He articulates the sentiment prevailing in 2024: businesses are demanding tangible results from their AI investments.

Sengupta points out a significant issue: while CFOs are increasingly impatient for AI to demonstrate its value, the actual return on investment (ROI) for many remains elusive. This observation is backed by an IBM study, which found that the average ROI for generative AI projects last year was a mere 5.9%, starkly underwhelming compared to the typical 10% cost of capital. In other words, the average project earned less than the capital behind it cost to raise, destroying value rather than creating it.
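For a concrete sense of that gap, the back-of-the-envelope sketch below (in Python, using a hypothetical $1 million project budget; only the 5.9% and 10% figures come from the IBM study cited above) shows how a positive ROI can still leave a project underwater once the cost of capital is counted:

    # Back-of-the-envelope: a positive ROI that sits below the
    # cost of capital still destroys economic value.
    investment = 1_000_000       # hypothetical project budget (USD)
    roi = 0.059                  # 5.9% average generative AI ROI (IBM study)
    cost_of_capital = 0.10       # typical 10% cost of capital

    gross_return = investment * roi            # $59,000 earned
    hurdle = investment * cost_of_capital      # $100,000 it needed to earn
    economic_profit = gross_return - hurdle    # -$41,000: value destroyed

    print(f"Economic profit: ${economic_profit:,.0f}")

On these assumed numbers, the project clears $59,000 but needed $100,000 just to cover its financing, leaving it $41,000 in the red.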

Sengupta believes that the core of the problem lies in a crucial gap between the theoretical allure of AI and its practical application in the business sphere. He criticizes the current AI bubble, fueled by aggressive marketing that inflates user expectations far beyond what the technology can deliver immediately.

The solution, according to him, is a rapid implementation strategy. Sengupta argues against the pursuit of 'perfect data' before AI deployment, advocating instead for a more dynamic approach in which technology is quickly put into users' hands and improved iteratively based on feedback.

Government's Stance and Actions on AI

Amid the evolving dynamics of AI in business, the Biden Administration has taken proactive steps, showcasing a commitment to responsibly harnessing AI's potential while mitigating its risks.

Following the President's October 2023 executive order on the safe, secure, and trustworthy development of AI, the Administration has rolled out a series of actions that reflect a strategic, multifaceted approach to AI governance. Deputy Chief of Staff Bruce Reed, who chairs the White House AI Council, has been pivotal in driving these actions.

This council, comprising officials from diverse federal departments and agencies, is a testament to the Administration’s recognition of AI’s broad impact across various sectors. The Defense Production Act has been a crucial tool in this strategy.

By invoking this act, the Administration now requires AI developers to report essential information, including the results of AI safety tests, to the Department of Commerce. This requirement is a significant step toward ensuring transparency and accountability in AI development, particularly in areas that could affect public safety and security.

Moreover, a draft rule proposed for cloud computing companies addresses a critical aspect of AI: the computational power that fuels it. This rule requires companies to report if they are channeling computational resources to foreign entities for AI training, a move that highlights the national security implications of AI technology.

Risk assessments conducted by nine federal agencies cover every critical infrastructure sector, showcasing the Administration's comprehensive approach to understanding and mitigating AI risks. These assessments are not a one-time effort; they establish a precedent for ongoing evaluations, ensuring that AI developments are consistently monitored and managed.

Jamie Nafziger's commentary on these measures emphasizes their novelty and the forward-looking approach the U.S. is taking toward AI. Efforts to recruit and train AI talent, such as the partnership between the National Science Foundation and Nvidia and the AI Talent Surge initiative, are indicative of a broader strategy not only to regulate AI but also to foster its advancement.

President Biden delivers remarks on his Administration's efforts to safeguard the development of artificial intelligence. © Getty Images/Chip Somodevilla

Regulatory Bodies and AI: CFTC's Initiative

Another significant development in the realm of AI regulation comes from the United States Commodity Futures Trading Commission (CFTC).

The agency has expressed a keen interest in understanding how AI can be used in various aspects of the derivatives markets, such as trading, risk management, compliance, and cybersecurity. By issuing a request for comments, the CFTC aims to gain insights into the current and potential applications of AI, which may influence future regulatory actions.

Rostin Behnam, Chair of the CFTC, underscores the importance of this initiative, aligning it with the broader goals set by the Biden Administration for the development of AI. Commissioner Kristin Johnson highlights the need for a comprehensive understanding of AI's integration in market dynamics.

This initiative is part of a broader effort to update investor protection measures in line with technological advancements, as emphasized by Commissioner Christy Goldsmith Romero.

The landscape of AI in business and its regulation is undergoing significant transformation.

While businesses grapple with the challenges of integrating AI effectively into their operations, governmental and regulatory bodies are stepping up to ensure that the development and use of AI are safe, secure, and beneficial.

The journey of AI from a theoretical marvel to a practical tool in business and regulation is complex and fraught with challenges. However, it is a journey that holds immense promise for the future, shaping how businesses operate and how governments and regulatory bodies adapt to a rapidly changing technological world.