
Making AI’s Arcane Neural Networks Accessible

Data Scientists Remain in Hot Demand, but They Will Give Up More of Their Core Functions This Year and Beyond to Automated Tools

We’re only a few months into the new year, but already we’re seeing signs that automated machine learning modeling, sometimes known as autoML, is rising to a new plateau of sophistication.

Specifically, it appears that a promising autoML approach known as “neural architecture search” will soon become part of data scientists’ core toolkits. This refers to tools and methodologies for automating the creation of optimized architectures for the convolutional, recurrent, and other neural networks at the heart of AI’s machine learning models.

Neural architecture search tools optimize the structure, weights, and hyperparameters of a machine learning model’s algorithmic “neurons” in order to make them more accurate, faster, and more efficient at data-driven inference. This technology has only recently begun to emerge from labs devoted to basic research in AI tools and techniques. The research literature shows that neural architecture search tools have already outperformed manually designed neural nets in many AI R&D projects.
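To make the idea concrete, here is a deliberately simplified sketch of the search loop such a tool automates. It uses plain random search over a toy two-dimensional search space, and the evaluation function is a made-up stand-in for what a real tool would do: train each candidate network and measure its validation accuracy.

```python
import random

# Toy search space: how many layers, and how wide each layer is.
SEARCH_SPACE = {
    "num_layers": [2, 3, 4, 5],
    "layer_width": [16, 32, 64, 128],
}

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    return {
        "num_layers": rng.choice(SEARCH_SPACE["num_layers"]),
        "layer_width": rng.choice(SEARCH_SPACE["layer_width"]),
    }

def evaluate(arch):
    """Stand-in for training the candidate and measuring validation
    accuracy; here it simply rewards mid-sized networks."""
    size = arch["num_layers"] * arch["layer_width"]
    return 1.0 / (1.0 + abs(size - 128))

def random_search(num_trials=50, seed=0):
    """Keep the best-scoring architecture across num_trials samples."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = random_search()
```

Production tools replace both pieces, of course: the search space covers full network topologies, and each evaluation is a (costly) training run, which is exactly why smarter search strategies such as reinforcement learning matter.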

Commercialization Coming to Neural Architecture Search

Within the burgeoning autoML space, neural architecture search is showing signs of early commercialization.

At CES 2020 in Las Vegas in early January, I met with Montreal-based AI startup Deeplite. Its Lightweight Intelligence tool can automatically optimize a neural network for high-performance inferencing on a range of edge-device hardware platforms. It does this without requiring manual inputs or guidance from scarce, expensive data scientists.

To see how Deeplite’s tool accomplishes this, check out this discussion of the firm’s partnership with Taiwanese company Andes Technology. The reinforcement learning (RL) engine in Deeplite’s hardware-aware neural architecture search automatically found, trained, and deployed compact neural network models to Andes’ RISC-V hardware. It compressed MobileNet models trained on the Visual Wake Words dataset from 13MB down to less than 188KB, a reduction of almost 99 percent, with only a 1 percent drop in inferencing accuracy.
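The “almost 99 percent” figure checks out with quick back-of-the-envelope arithmetic (assuming the usual 1MB = 1024KB convention):

```python
original_kb = 13 * 1024    # 13MB MobileNet model, expressed in KB
compressed_kb = 188        # compressed size reported by Deeplite

# Fractional size reduction: 1 - (compressed / original)
reduction = 1 - compressed_kb / original_kb
print(f"Size reduction: {reduction:.1%}")
```

The result is roughly a 98.6 percent reduction, i.e. “almost 99 percent” as claimed.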

Amazon Launches Open Source autoML Toolkit with Neural Architecture Search

Another key milestone in the maturation of neural architecture search was Amazon’s recent launch of an open source autoML toolkit with this capability built in. Released the same week as CES, Amazon’s new AutoGluon tool enables AI developers of all skill levels to automate the optimization of new or existing models for high-performance inferencing on diverse target hardware platforms.

AutoGluon automates data preparation, model development, hyperparameter tuning, and training within the devops flow of an ML model. It can optimize existing PyTorch and MXNet ML models. It can also interface with existing AI devops pipelines via APIs to automatically tweak an existing ML model and thereby improve its performance on inferencing tasks.

Amazon currently has AutoGluon running on Linux platforms but has announced plans for macOS and Windows support. Available from this project website or GitHub, AutoGluon can automatically generate a high-performance ML model from as few as three lines of Python code. It taps into available compute resources and uses reinforcement learning algorithms to search for the best-fitting neural network architecture for its target environment.

AutoGluon uses RL to speed automated neural architecture searches while using computing resources efficiently. Indeed, RL, as implemented in both AutoGluon and Deeplite’s solution, is proving to be the most fruitful approach for recent advances in this area. It uses agent-centric actions and rewards to search the space of candidate neural architectures, guided by estimates of how well trained architectures will perform on unseen data. If you truly want to get into the weeds of how AutoGluon works, check out this link.
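In outline, the RL formulation treats each architectural choice as an action and the measured accuracy of the resulting trained model as the reward. The hypothetical sketch below boils this down to a simple epsilon-greedy bandit over four coarse architecture choices, with an invented noisy reward function standing in for an actual training-and-validation run:

```python
import random

# A tiny, invented action space of coarse architecture choices.
ACTIONS = ["shallow_narrow", "shallow_wide", "deep_narrow", "deep_wide"]

def reward(action, rng):
    """Stand-in for training the chosen architecture and measuring
    held-out accuracy; noisy, with 'deep_wide' best on average."""
    base = {"shallow_narrow": 0.70, "shallow_wide": 0.78,
            "deep_narrow": 0.80, "deep_wide": 0.88}[action]
    return base + rng.uniform(-0.02, 0.02)

def epsilon_greedy_search(steps=500, epsilon=0.1, seed=0):
    """Estimate each action's value from observed rewards; mostly
    exploit the current best estimate, occasionally explore at random."""
    rng = random.Random(seed)
    counts = {a: 0 for a in ACTIONS}
    values = {a: 0.0 for a in ACTIONS}
    for _ in range(steps):
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)       # explore
        else:
            action = max(ACTIONS, key=values.get)  # exploit
        r = reward(action, rng)
        counts[action] += 1
        # Incremental mean update of the action-value estimate.
        values[action] += (r - values[action]) / counts[action]
    return max(ACTIONS, key=values.get)

best = epsilon_greedy_search()
```

Real RL-based search tools work with vastly larger action spaces (full network topologies) and use policy-gradient or controller-network methods rather than a flat bandit, but the explore/exploit tradeoff against an expensive, noisy reward signal is the same.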

RL is an up-and-coming alternative to evolutionary algorithms, which have been central to neural architecture search in AI R&D since the 1990s. Evolutionary algorithms are still widely used in labs such as OpenAI, Uber Labs, Sentient Labs (now Evolv), DeepMind, and Google Brain.
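For contrast, a bare-bones evolutionary variant of the same idea, again a hypothetical sketch with a stand-in fitness function: maintain a population of candidate architectures, keep the fittest half each generation, and refill the population with mutated copies of the survivors.

```python
import random

# Allowed widths for each layer in a 3-layer toy architecture.
WIDTHS = [16, 32, 64, 128, 256]

def fitness(arch):
    """Stand-in for trained-model accuracy; here it favors
    architectures whose total unit count is close to 256."""
    return 1.0 / (1.0 + abs(sum(arch) - 256))

def mutate(arch, rng):
    """Copy the parent and randomly change one layer's width."""
    child = list(arch)
    child[rng.randrange(len(child))] = rng.choice(WIDTHS)
    return child

def evolve(generations=40, pop_size=10, seed=0):
    rng = random.Random(seed)
    # Initial population: random 3-layer architectures (layer widths).
    population = [[rng.choice(WIDTHS) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Refill the population with mutated copies of survivors.
        population = survivors + [mutate(rng.choice(survivors), rng)
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

The design tradeoff versus RL is visible even at this scale: evolution needs no value estimates or gradients, only a way to compare candidates, which is part of why it has remained a lab workhorse for decades.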

Still Early for Neural Architecture Search in Mainstream AI Devops

As autoML data science platforms become prevalent in the enterprise world, neural architecture search tools such as these will be a standard component. However, this capability is still scarcely evident in most AI devops environments.

By the end of this year, I predict that more than half of commercial and open source AI devops workbenches will add neural architecture search as an integrated feature. As autoML gains adoption and improves, it will boost data scientists’ productivity by guiding their decisions regarding whether to build their models on established machine learning algorithms, such as linear regression and random forest algorithms, or on any of the newer, more advanced neural-network algorithms.

As the decade proceeds, neural architecture search will reduce the need for data scientists to understand the neural-net guts of their ML models. This emerging approach will democratize AI by freeing developers to evolve their skillset away from tweaking arcane algorithms and toward developing powerfully predictive intelligent apps.

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

The original version of this article was first published on InfoWorld.


Author Information

James has held analyst and consulting positions at SiliconANGLE/Wikibon, Forrester Research, Current Analysis and the Burton Group. He is an industry veteran, having held marketing and product management positions at IBM, Exostar, and LCC. He is a widely published business technology author, has published several books on enterprise technology, and contributes regularly to InformationWeek, InfoWorld, Datanami, Dataversity, and other publications.
