Predicting AGI: Bold Claims and Skepticism in AI Development

by Faruk Imamovic
AI Boom Increases Number of U.S. Millionaires
© Getty Images/Tasos Katopodis

Apple's Worldwide Developers Conference (WWDC) this week brought several announcements, but one stood out as surprisingly long-awaited: a calculator app for the iPad. While this might seem like a trivial update, it marks a significant moment for iPad users, who have been waiting for the feature for over a decade.

The new calculator app isn't just about basic arithmetic; it represents Apple's dedication to listening to its users and enhancing the functionality of its devices in even the smallest ways. This addition is part of a broader strategy to refine user experience and solidify the iPad's role as a versatile tool for both personal and professional use.

Addressing AI Challenges with a Cautious Approach

Apple’s AI strategy also took center stage at WWDC. Unlike other tech giants such as Adobe, Microsoft, and Google, which have faced significant issues with their AI products—ranging from privacy concerns to over-promising on capabilities—Apple is taking a more cautious and thoughtful approach.

The company is using compressed, on-device models derived from open-source work, fine-tuned for specific tasks like summarization, proofreading, and auto-replies. The models can hot-swap adapters as needed, allowing seamless functionality while maintaining privacy. Most AI tasks can therefore be completed directly on the device, keeping user data private and secure.
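A rough sketch of the adapter idea follows. All names here (`OnDeviceModel`, `load_adapter`, the task list) are invented for illustration; Apple has not published this API, and real adapters are small sets of fine-tuned weights rather than prompt strings.

```python
# Toy illustration of hot-swappable task adapters on one resident base model.
# Each "adapter" here is just a prompt prefix; in practice it would be a small
# bundle of fine-tuned weights loaded alongside shared base weights.

ADAPTERS = {
    "summarize": "Summarize: ",
    "proofread": "Proofread: ",
    "auto_reply": "Reply to: ",
}

class OnDeviceModel:
    """Hypothetical stand-in for a compressed on-device base model."""

    def __init__(self):
        self.active_adapter = None

    def load_adapter(self, task: str) -> None:
        # Hot swap: only the small adapter changes; base weights stay resident.
        self.active_adapter = ADAPTERS[task]

    def run(self, text: str) -> str:
        if self.active_adapter is None:
            raise RuntimeError("no adapter loaded")
        return self.active_adapter + text

model = OnDeviceModel()
model.load_adapter("summarize")
print(model.run("WWDC recap"))   # handled locally with the summarize adapter
model.load_adapter("proofread")  # swap tasks without reloading the base model
print(model.run("WWDC recap"))
```

The design point is that one shared model plus many tiny adapters costs far less memory than one full model per task, which is what makes on-device multi-task inference plausible.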

For more complex queries, Apple uses anonymized, encrypted data sent to a medium-sized model on its servers, which does not store the data. The most challenging tasks, involving advanced writing or synthetic reasoning, are sent to ChatGPT with user permission, ensuring that OpenAI also cannot store this data. This multi-tiered approach aims to balance efficiency, privacy, and functionality effectively.
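The three-tier escalation described above can be sketched as a simple dispatcher. The complexity thresholds, tier names, and consent flag below are assumptions for illustration, not Apple's actual routing logic:

```python
# Hypothetical sketch of tiered request routing: on-device first, Apple's
# server model for harder queries, ChatGPT only with explicit user consent.

def route_request(complexity: float, user_allows_chatgpt: bool = False) -> str:
    """Return which tier would handle a request of the given complexity (0-1)."""
    if complexity < 0.5:
        return "on-device"      # private, local inference
    if complexity < 0.8 or not user_allows_chatgpt:
        return "apple-server"   # anonymized, encrypted, not stored
    return "chatgpt"            # hardest tasks, gated on user permission

print(route_request(0.2))                            # on-device
print(route_request(0.9))                            # apple-server (no consent)
print(route_request(0.9, user_allows_chatgpt=True))  # chatgpt
```

Note that without consent the hardest requests fall back to Apple's servers rather than leaving the ecosystem, which matches the permission-gated handoff the article describes.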

© Getty Images/Justin Sullivan

The Privacy-First Strategy: A September Reveal

Apple’s presentations highlighted the robust privacy and functionality aspects of their new AI features. However, the real test will come in September when these features are officially released. Apple has a history of transforming existing technologies into market-leading products, as evidenced by the iPhone. The tech world is watching closely to see if Apple’s measured approach to AI will yield similar success.

Google's AI Woes: A Lesson in Overpromising

While Apple is cautious, Google's AI initiatives have faced scrutiny for their frequent inaccuracies. Notably, Google's AI has been criticized for providing bizarre and incorrect answers, such as suggesting that glue is an ingredient for pizza. Despite updates and fixes, these issues persist, with AI models still producing incorrect information and even citing previous errors as sources.

This has highlighted a critical challenge in AI development: the need for accurate data and reliable performance. Google's ongoing struggles underscore the importance of rigorous testing and validation in AI systems, something Apple seems keen to address with its more conservative rollout.

© Getty Images/Michael M. Santiago

Predicting AGI: Bold Claims and Skepticism

The conversation around Artificial General Intelligence (AGI) has also been heating up. Leopold Aschenbrenner, a former OpenAI researcher, has sparked debate with his prediction that AGI could be achieved by 2027. He argues that linear progress in AI capabilities supports this timeline, suggesting that AI models could soon match the abilities of human researchers and engineers.

However, this prediction is met with skepticism. Critics point out potential technological ceilings and emphasize the importance of continued innovation and caution. James Betker, another OpenAI researcher, shares a similar timeline but highlights the parallel advances in systems thinking and embodiment that would also be needed to achieve AGI.

On the other side of the debate, French AI researcher François Chollet has launched a $1 million prize challenging AI systems to pass the Abstraction and Reasoning Corpus (ARC) test, which is designed to assess their ability to adapt to novel ideas and situations. Chollet argues that current benchmarks mainly test memorization rather than true reasoning, a significant hurdle for AGI development.
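To make the benchmark concrete, here is a toy ARC-style task. Each ARC task gives a few input-to-output grid pairs, and the solver must infer the transformation and apply it to a fresh input. The specific rule below (mirror each row) is made up for illustration; real ARC tasks are far more varied.

```python
# A toy ARC-style task: grids are lists of rows, and the hidden rule must be
# inferred from a handful of training pairs rather than memorized.

train_pairs = [
    ([[1, 0], [2, 0]], [[0, 1], [0, 2]]),
    ([[3, 4], [0, 0]], [[4, 3], [0, 0]]),
]

def mirror(grid):
    """The hidden rule for this toy task: reflect each row horizontally."""
    return [list(reversed(row)) for row in grid]

# A solver that truly reasons must discover rules like `mirror` from the
# pairs alone; a memorizer fails because each task's rule is novel.
assert all(mirror(x) == y for x, y in train_pairs)
print(mirror([[5, 6], [7, 8]]))  # → [[6, 5], [8, 7]]
```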

AI and Reasoning: The Stupidity of LLMs

Recent research supports the notion that current large language models (LLMs) struggle with novel reasoning tasks. Despite their impressive capabilities in areas like passing bar exams, these models often fail basic common-sense reasoning tests. For instance, when asked a simple question about the number of sisters Alice’s brother has, many LLMs provided incorrect answers with high confidence.
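The arithmetic behind that question is trivial, which is what makes the failures striking. If Alice has B brothers and S sisters, each brother's sisters are Alice's S sisters plus Alice herself:

```python
# The "Alice" common-sense question: Alice has some brothers and sisters;
# how many sisters does Alice's brother have? His sisters are Alice's
# sisters plus Alice herself.

def sisters_of_alices_brother(alice_brothers: int, alice_sisters: int) -> int:
    return alice_sisters + 1  # the brother count is irrelevant

print(sisters_of_alices_brother(1, 2))  # → 3
```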

This highlights a fundamental weakness in current AI models: their inability to generalize knowledge in a way that mimics human reasoning. This flaw calls into question the reliability of LLMs in real-world applications, where novel and complex problem-solving is often required.

Election Misinformation: A Persistent Problem

A recent study by GroundTruthAI revealed that AI models like Google Gemini 1.0 Pro and various versions of ChatGPT often provide incorrect information about elections. This misinformation poses significant risks, particularly as these platforms are increasingly relied upon for information.

The study found that these models frequently gave wrong answers about voter registration and other election-related queries. This problem is exacerbated by the models' confidence in their incorrect responses, highlighting the need for better accuracy and validation in AI systems, especially in critical areas like elections.

ChatGPT © Getty Images/Leon Neal

The Future of AI Training Data: A Looming Bottleneck

A peer-reviewed study from Epoch AI has raised concerns about the future availability of AI training data. The study estimates that the supply of publicly available text-based data could be exhausted by 2026 to 2032, presenting a significant challenge for scaling up AI models.

To address this, companies may turn to video, audio, and synthetic data, as well as potentially leveraging private data. However, this shift raises ethical and privacy concerns, as highlighted by professors Angela Huyue Zhang and S. Alex Yang. They warn that models like GPT-4o might exploit crowdsourced data, underscoring the need for transparent and ethical AI development practices.

AI in Cybersecurity: Strengths and Limitations

In cybersecurity, AI has shown both promise and limitations. Recent research demonstrated that GPT-4 could effectively identify and exploit known security vulnerabilities but struggled with unknown, or zero-day, vulnerabilities. This finding underscores the importance of continuous innovation and adaptation in AI to stay ahead of emerging threats.

A subsequent study showed that a GPT-4 planning agent, leading a team of subagents, was able to exploit 53% of zero-day vulnerabilities in test websites. This success illustrates AI's potential in cybersecurity, but also highlights the ongoing need for human oversight and ingenuity to address the evolving landscape of digital threats.
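The planner-plus-subagents pattern from that study can be sketched in miniature. The planner and subagents below are stubs returning canned results (the real system drove GPT-4 against live test sites); function names and the stub success rule are assumptions for illustration only.

```python
# Minimal sketch of the planner/subagent pattern: a planning agent decomposes
# a target into tasks and dispatches each task to a specialist subagent.

def plan(target: str) -> list[str]:
    # The real planner would ask an LLM to enumerate attack surfaces; stubbed.
    return [f"probe {target} for SQL injection", f"probe {target} for XSS"]

def run_subagent(task: str) -> bool:
    # A real subagent would drive an HTTP client against its one task; here we
    # pretend only the SQL-injection probe succeeds.
    return "SQL" in task

def exploit(target: str) -> float:
    """Return the fraction of planned tasks the subagents completed."""
    tasks = plan(target)
    successes = sum(run_subagent(t) for t in tasks)
    return successes / len(tasks)

print(exploit("test-site.local"))  # → 0.5
```

The design point is the division of labor: the planner holds the high-level goal while each subagent keeps a small, focused context, which is what reportedly let the team outperform a single monolithic agent.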

AI's Financial Impact: A New Generation of Millionaires

The AI boom has also had significant financial implications. Consulting firm Capgemini reported that the number of millionaires in the U.S. increased by 500,000 last year, reaching a total of 7.4 million. This surge is attributed to investor optimism in AI technologies, with companies like Tesla, Meta, and Nvidia seeing substantial gains.

This financial growth reflects the broader economic impact of AI and the increasing importance of technological innovation in driving market success. As AI continues to evolve, its influence on the economy is likely to expand, creating new opportunities and challenges for investors and businesses alike.

Industry Backlash: Adjusting AI Features and Policies

Finally, companies like Adobe and Microsoft have faced backlash over their AI policies and features. Adobe recently revised its terms of service after users raised concerns about the company potentially training AI on their content. Microsoft also adjusted its Recall feature following privacy concerns, emphasizing the need for responsible AI practices.

These changes underscore the importance of balancing innovation with user trust and privacy. As AI technologies become more integrated into everyday life, companies must navigate the complexities of user expectations and ethical considerations.