What Happened This Month: June brought a steady stream of technology, product, and market news reflecting the still-nascent world of AI. The month’s developments touched on AI model interpretability, new capabilities, partnerships, transparency, and new AI platforms. It also featured the discontinuation of an AI-powered order-taking system due to inaccuracy and a scuttled partnership between two major AI-focused organizations over data privacy and security concerns.
Technology Developments
What? OpenAI has developed new methods to improve the interpretability of GPT-4 by training sparse autoencoders to identify patterns, or features, within the model’s neural activations. The approach scales to 16 million features in GPT-4, an improvement in both scalability and interpretability, and the features help explain how the model represents different concepts. OpenAI has released code and visualization tools that let researchers explore these features, with the goal of monitoring and steering model behavior.
- Why? Neural networks are complex and opaque: their internal operations are not easily understood, which makes predicting their behavior and outcomes challenging. Methods to decode how they work are therefore needed to improve AI safety and transparency.
- What It Means: Improving the interpretability of AI models is crucial for ensuring their safety and reliability. By making AI systems more transparent, researchers can better understand and mitigate risks, fostering greater trust in, and control over, AI technologies. Read the original paper at this link. A minimal illustrative sketch of the underlying technique appears below.
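The referenced paper centers on training sparse autoencoders over model activations. The sketch below is a generic, minimal illustration of that idea, not OpenAI’s released code; the dimensions, the top-k sparsity value, and the training loop are all simplified assumptions chosen for readability.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal k-sparse autoencoder over model activations (illustrative only)."""

    def __init__(self, d_model: int = 768, n_features: int = 16384, k: int = 32):
        super().__init__()
        self.k = k                                    # active features kept per input
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def forward(self, activations: torch.Tensor):
        pre = self.encoder(activations)               # project into a much wider feature space
        topk = torch.topk(pre, self.k, dim=-1)        # keep only the k strongest features...
        features = torch.zeros_like(pre).scatter_(-1, topk.indices, topk.values)  # ...zero the rest
        reconstruction = self.decoder(features)       # map the sparse code back to activations
        return features, reconstruction

# Training sketch: minimize reconstruction error on captured model activations.
sae = SparseAutoencoder()
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
batch = torch.randn(64, 768)                          # stand-in for real GPT activation vectors
features, recon = sae(batch)
loss = ((recon - batch) ** 2).mean()
loss.backward()
optimizer.step()
```

Because only a handful of features fire for any given input, individual features tend to align with human-recognizable concepts, which is what makes them useful for monitoring and steering model behavior.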
What? Apple introduced Apple Intelligence, its generative AI platform, which will be integrated across its portfolio of iPhones, iPads, and Macs and the software those devices run. The platform’s features include enhanced Siri capabilities and will be available on the iPhone 15 Pro and on devices with M1 or newer chips. Generative AI remains a key area of investment: the market for text analysis, generation, and summarization software is projected to reach $30.7 billion in annual revenue by 2029, according to Futurum Intelligence, reflecting a 2024-2029 compound annual growth rate (CAGR) of 11.2%.
- Why? AI is becoming a key requested feature in consumer devices, as evidenced by the AI investments of Microsoft, Intel, and other consumer device makers. Apple did not want to be left behind or beholden to other companies’ platforms.
- What It Means: The AI features are anticipated to significantly enhance the user experience and drive demand for the latest Apple devices, boosting sales as consumers upgrade to gain access to AI-enhanced capabilities. See Apple’s site, which describes the functions expected to be released across its devices.
What? Current and former employees of OpenAI and Google DeepMind signed an open letter titled “A Right to Warn about Advanced Artificial Intelligence.” The letter, with 13 signatories, details specific concerns over the lack of safety oversight and governance in the AI industry and calls for better protection for whistleblowers.
- Why? The high-stakes nature of AI development, combined with an environment that discourages dissent, has left employees unable to effect meaningful change from within. Some have therefore chosen to publicize what they perceive as a lack of safety and governance around AI to raise awareness.
- What It Means: As AI technology rapidly advances, calls for improved safety oversight and whistleblower protections are crucial to safeguarding users from unwanted exposure to AI models and protecting enterprises from future lawsuits. Ensuring transparency and accountability can help mitigate the risks associated with AI deployment and keep the development of AI technologies aligned with public interest and safety. Read the letter here.
New Products or Services
What? Stability.ai announced Stable Audio Open, an open-source model designed for generating short audio samples and sound effects from text prompts.
- Why? Much of the focus on using generative AI to create assets has been on images, and this new model extends the capabilities to another medium.
- What It Means: Stable Audio Open, an open model that can be customized with a user’s own audio data, democratizes access to advanced audio generation tools. The model is trained on Freesound and Free Music Archive data in a way that respects creator rights. It is designed to produce audio samples of up to 47 seconds, ideal for creating riffs, samples, and loops, and as such cannot be used to create complete songs. This should help constrain the technology from being used to reproduce full copyrighted works. See the announcement at Stability.ai’s web site. An illustrative usage sketch follows this item.
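For readers who want to experiment, the following is a minimal text-to-audio sketch. It assumes the model is accessible through Hugging Face’s diffusers library via StableAudioPipeline under the identifier stabilityai/stable-audio-open-1.0 and that a CUDA GPU is available; exact parameter names, license gating, and hardware requirements may differ, so treat it as illustrative rather than definitive.

```python
# Illustrative sketch: generating a short sound effect or loop from a text prompt.
import torch
import soundfile as sf
from diffusers import StableAudioPipeline

# Load the pipeline in half precision and move it to the GPU.
pipe = StableAudioPipeline.from_pretrained(
    "stabilityai/stable-audio-open-1.0", torch_dtype=torch.float16
).to("cuda")

result = pipe(
    prompt="A short drum loop, 120 BPM, punchy kick and snare",
    negative_prompt="low quality, distortion",
    num_inference_steps=100,
    audio_end_in_s=10.0,                              # the model targets clips up to ~47 seconds
    generator=torch.Generator("cuda").manual_seed(0), # fixed seed for reproducibility
)

# Write the first generated waveform to disk (samples x channels for soundfile).
audio = result.audios[0].T.float().cpu().numpy()
sf.write("drum_loop.wav", audio, pipe.vae.sampling_rate)
```

The workflow mirrors text-to-image generation: a text description conditions a diffusion process that produces a waveform rather than pixels.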
What? Apple partnered with OpenAI to integrate the ChatGPT chatbot into the iPhone’s operating system.
- Why? Apple seeks to enhance its AI capabilities in the short term, as it develops its own Apple Intelligence platform. OpenAI, meanwhile, gets access to millions of Apple users.
- What It Means: The deal should assure customers, partners, and investors that Apple is taking its AI competitive deficiencies seriously, and it should serve as an effective stopgap until Apple’s internal AI platform is ready for prime time. Notably, the integration of OpenAI’s chatbot will be an opt-in service, reflecting the paramount importance of privacy and user control over AI interactions.
Market Developments
What? Fast-food giant McDonald’s announced it is discontinuing its AI-driven automated ordering system, developed by IBM, which had been in testing and trials since 2019.
- Why? The ordering system failed to meet customer expectations for order accuracy. It could not reliably understand customer orders and would take actions that made no sense, prompting viral videos from customers ridiculing the technology. McDonald’s says the system will be removed by the end of July but that it still sees opportunities for AI in the future.
- What It Means: AI has been touted as a technology that enables companies to augment or replace workers handling repetitive or simple tasks. However, examples of AI technology failing to understand customers’ words (and, more importantly, their semantic intent) tend to make headlines and are clear evidence that additional work is needed on model training and guardrails to improve accuracy and trust. See the BBC News story highlighting the challenges faced by McDonald’s. Notably, Futurum Intelligence still projects that annual spending on conversational AI will reach $21.4 billion by 2029, up from $12 billion in 2024, reflecting a CAGR of 12.3% (a quick arithmetic check of that figure appears below).
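As a sanity check on the forecast arithmetic only (not a statement about Futurum’s methodology), the implied compound annual growth rate can be recomputed from the two endpoints cited above:

```python
# CAGR implied by $12B in 2024 growing to $21.4B in 2029 (5 years).
start, end, years = 12.0, 21.4, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # prints ~12.3%, consistent with the cited figure
```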
What? Apple reportedly decided not to integrate Meta’s AI models into its products due to data privacy concerns, abandoning a potential partnership with the company, according to a Bloomberg news report.
- Why? Apple said it prioritizes user data security over potential advancements in AI capabilities.
- What It Means: The decision highlights the constant tension within the industry between AI functionality and privacy and security. Typically, AI models improve as more granular and specific data, including personal user data, is incorporated into them. However, that additional functionality and accuracy run counter to users’ desire to limit the type and amount of personal data captured and used to train AI models. The decision could set a precedent for establishing clear data usage and privacy guidelines, particularly for companies seeking to incorporate massive amounts of user-generated data. See Bloomberg’s news story for more information.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually, based on data and other information that might have been provided for validation, and are not those of The Futurum Group as a whole.
Other Insights from The Futurum Group:
The Copilot+ PC Disruption Is Here: What Happens Now?
Elon Musk’s xAI: $6 Billion Funding Boost in the AI Arms Race Against ChatGPT
Author Information
Keith has over 25 years of experience in research, marketing, and consulting-based fields.
He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.
In his career as a financial and technology journalist, he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.
He is a member of the Association of Independent Information Professionals (AIIP).
Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.