How IBM’s AI and Open-Source Granite Models Revolutionize Sports Technology

The News: IBM and The All England Lawn Tennis Club have launched the ‘Catch Me Up’ feature, using IBM’s Granite LLM through the watsonx platform, to provide personalized player stories and keep fans updated on all singles matches at Wimbledon. A new IBM survey reveals that 55% of global tennis fans believe AI will positively impact sports. Read the IBM Wimbledon announcement here.

Analyst Take: The summer of sports is upon us. Whether it is the Olympics in Paris, Euro 2024 and the Copa América in football, or the regular calendar of The Masters in golf and major tennis tournaments such as the US Open and Wimbledon, sports fans have a plethora of options for their viewing pleasure. Alongside these options, technology is rapidly transforming the way fans experience games and the way athletes train. Whether in the stadium or watching remotely, technological advancements have been integrated into every aspect of sports. For fans, that means real-time updates, personalized content, and unique insights that deepen their engagement. For athletes, it encompasses data-driven training methods, strategic game planning, and advanced coaching techniques. IBM has been at the forefront of this integration, leveraging its expertise in AI and data analytics. Its strategic collaborations with prestigious events such as The Masters in golf and the US Open and Wimbledon in tennis demonstrate a thoughtful and sustained commitment to enhancing the sports experience. IBM’s role goes beyond mere sponsorship; it is about creating solutions that enrich the fan experience and provide invaluable tools for athletes and coaches alike.

What Is IBM Doing in AI?

The landscape of artificial intelligence is marked by a dynamic debate between open and closed models, and by the trade-off between small and large models. IBM’s recent initiatives, particularly the introduction of its Granite code models at its Think conference a few months back, reflect a nuanced approach to these discussions. The release of the Granite family of models to the open-source community underscores IBM’s commitment to democratizing AI innovation and making advanced tools accessible to a broader audience.

IBM’s Granite models are designed to cover a wide range of capabilities essential to software development, such as code generation, bug fixing, explaining and documenting code, and maintaining repositories. The family includes variants ranging from 3 billion to 34 billion parameters, offered as both base models and instruction-following models. These models build on a robust foundation of IBM’s previous AI efforts, including the CodeNet dataset, which contains roughly 500 million lines of code across more than 50 programming languages. This extensive dataset has been instrumental in training models that can translate legacy code, assist in debugging, and even generate new code from simple English instructions.
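Because the instruction-tuned Granite code models are published on Hugging Face, this "English instructions to code" workflow can be sketched with the standard `transformers` API. The checkpoint name below (`ibm-granite/granite-3b-code-instruct`) and the chat-template usage are assumptions to be verified against the model card; this is a minimal illustration, not IBM's reference implementation.

```python
# Sketch: driving a Granite code model through Hugging Face transformers.
# The checkpoint id and chat format are assumptions -- verify them against
# the model card on Hugging Face before relying on this.
MODEL_ID = "ibm-granite/granite-3b-code-instruct"  # assumed checkpoint name


def build_messages(instruction: str) -> list:
    """Wrap a plain-English instruction in the chat-message format that
    instruction-tuned checkpoints typically expect."""
    return [{"role": "user", "content": instruction}]


def generate_code(instruction: str, max_new_tokens: int = 256) -> str:
    """Load the model lazily and generate a completion for one instruction."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    input_ids = tokenizer.apply_chat_template(
        build_messages(instruction),
        return_tensors="pt",
        add_generation_prompt=True,
    )
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output_ids[0][input_ids.shape[1]:],
                            skip_special_tokens=True)


# Example call (downloads the checkpoint on first use):
# print(generate_code("Write a Python function that reverses a string."))
```

The same pattern applies to the larger instruct variants; only the checkpoint id changes, at the cost of more memory and compute.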

The market is currently grappling with the challenge of choosing between small and large models. Larger models, while powerful, come with significant computational costs and can be unwieldy for specific tasks. Smaller models, on the other hand, may lack the breadth of capabilities but offer efficiency and targeted performance. IBM’s Granite models aim to strike a balance by providing high performance without the prohibitive costs associated with massive models. This approach is particularly relevant for enterprises looking to integrate AI into their workflows without incurring excessive costs.

One of the standout features of IBM’s Granite models is their versatility and adaptability. For example, the Granite-13b-chat-v2.1 model has demonstrated performance comparable to models such as Llama-2-13b-chat in key enterprise use cases such as summarization and entity extraction. Additionally, the Granite-20b-multilingual model shows impressive quality in translation tasks across multiple languages, further underscoring the models’ practical applications.

IBM’s open-source strategy is a significant departure from the more proprietary approaches of some of its competitors. By making these models available on platforms such as Hugging Face, GitHub, and RHEL AI, IBM encourages innovation and collaboration within the developer community. This openness not only fosters a more vibrant ecosystem but also ensures that the models are continually refined and improved through collective input.

Furthermore, IBM’s adherence to ethical guidelines in AI development is a critical aspect of its strategy. The Granite models are trained on data that meets stringent governance and compliance criteria, ensuring trustworthiness and reducing the risk of unintended biases. This focus on ethical AI is increasingly important as enterprises become more aware of the potential risks associated with AI adoption.

Another key area that is not getting the traction it deserves is InstructLab, an open-source project initiated by IBM and Red Hat and designed to democratize model development and alignment through open-sourced skills and knowledge. It provides a robust platform for training and fine-tuning AI models, using diverse data sources and sophisticated alignment techniques. The initiative focuses on creating versatile, instruction-following models that can be adapted for a variety of enterprise applications, addressing the market’s need for customizable AI solutions. By fostering a collaborative environment, InstructLab accelerates AI innovation and makes advanced tools more accessible to developers globally.

InstructLab’s approach not only enhances the capabilities of AI models but also ensures they are grounded in practical, real-world applications. This project exemplifies how open-source collaboration can drive technological advancements, bridging the gap between cutting-edge research and industry needs.
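Concretely, contributions to InstructLab take the form of small taxonomy entries: YAML files of seed question-and-answer examples from which the project synthesizes additional training data. The fragment below is an illustrative sketch only; the exact schema and field names have changed across InstructLab releases, and the contributor id and examples here are hypothetical, so check the project’s taxonomy documentation before submitting anything.

```yaml
# Illustrative sketch of an InstructLab taxonomy qna.yaml skill entry.
# Field names approximate the general shape of the project's skill files;
# the exact schema varies by release and should be verified upstream.
created_by: example-contributor        # hypothetical contributor id
task_description: >
  Teach the model to summarize tennis match reports in two sentences.
seed_examples:
  - question: "Summarize this match report: ..."   # elided example input
    answer: A two-sentence summary of the match would go here.
  - question: "Summarize this match report: ..."   # a second seed example
    answer: Another short illustrative summary.
```

From a handful of seed examples like these, the pipeline generates synthetic training data and aligns the model, which is what makes community skill contributions practical at scale.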

Looking Ahead

The future of AI in sports and enterprise applications is set to be transformative. In sports, generative AI is expanding its reach, providing coverage and analysis that were previously unattainable. IBM’s new features for the Wimbledon digital experience, such as the ‘Catch Me Up’ feature and the enhanced IBM SlamTracker, are prime examples of how AI can enrich the fan experience by offering personalized and real-time insights.

In the broader enterprise context, the demand for AI models that are both powerful and cost-effective is growing. IBM’s strategic positioning with the Granite models and the watsonx platform places it at the forefront of this evolution. By providing flexible, open, and ethically developed AI solutions, IBM is well-equipped to meet the diverse needs of modern enterprises.

Sports serve as a compelling narrative for the deployment of AI technologies. The dynamic and data-rich environment of sports provides a perfect testing ground for AI capabilities, illustrating their potential impact in real-world scenarios. As AI continues to evolve, its applications in sports will likely pave the way for broader adoption across various industries, showcasing the practical benefits of AI in enhancing efficiency, performance, and user engagement.

IBM’s thoughtful approach to AI, exemplified by its work in sports and its innovative Granite models, highlights a balanced and forward-thinking strategy that aligns with the needs of both the market and the broader community. This strategy not only advances the state of AI but also ensures that its benefits are accessible and ethically sound, fostering a future where AI can be a trusted and integral part of our lives.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Bridging the Gap Between Open Source AI & Mainstream Adoption – The Futurum Group

Highlights from Red Hat Summit 2024: Expanding on Innovation – The Futurum Group

IBM Cloud and AI Team with Wimbledon to Boost Fan Experience

Author Information

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.

Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.

Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.

Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.
