
Fighting misinformation in the age of compute frugality

How Aditya Gautam is reengineering AI-powered misinformation detection, and what it means for the future of compute.

Aditya Gautam
Machine-learning Tech Lead at Meta

10 min.

The lure of impact  

Aditya’s fascination with the field was sparked when AlexNet achieved its landmark victory in the ImageNet Large Scale Visual Recognition Challenge in 2012, the moment widely regarded as the birth of deep learning. After that, there was no looking back.

“That was the tipping point in the AI industry with the advent of deep learning, when neural networks were no longer just a research toy; they were deeply integrated into different products.”

He watched with the same awe five years later when the 2017 paper ‘Attention Is All You Need’ provided the blueprint for today's LLMs. For Aditya, watching from the sidelines was no longer enough. Inspired, he joined Google to be at the heart of creating value from machine learning. The next seismic shift came with the 2022 launch of ChatGPT, which placed the power of conversational AI directly into the hands of humanity. 

This relentless pattern of innovation is what keeps him engaged. “This whole industry is pretty exciting. It’s moving at a very fast pace with huge impact,” he says, noting that with plummeting token costs and continued model advances, LLMs are becoming commoditized. Yet for Aditya, the true lure wasn’t just the technology itself; it was the chance to apply his learnings to real problems. “I was very interested in applying AI agents in the real world,” he notes. With its immediate and high societal stakes, the complex challenge of detecting misinformation felt like the ultimate stress test.

Taking the pulse of an industry in overdrive

Generative AI’s recent growth spurt has impressed even its practitioners. “The whole space is dramatically and exponentially changing from application all the way down to the infrastructure layers… we have not seen anything like this in our lifetime,” Aditya says.

But this rapid evolution is not without its pitfalls. While the tooling has matured, thanks to better guardrails, fine-tuned models, and enriched data pipelines, there remains a risk that hype outpaces reality.

“Without a clear understanding of the use-case, the ROI, and a robust evaluation pipeline, deploying AI agents can result in disappointing real-world performance,” Aditya cautions.

That pragmatism colors his recent publication at the International AAAI Conference on Web and Social Media, titled ‘A MultiAgent System for Misinformation Lifecycle: Detection, Correction and Source Identification’, where he introduces a multi-agent framework for managing the full lifecycle of misinformation, from detection and diagnosis to automated correction and source verification.

“I explored LLM-powered multi-agent systems for tackling the entire misinformation lifecycle – from detecting and tracing its roots to explaining and automatically correcting it and then revising it with authentic sources to increase robustness. This level of automated correction simply wasn’t feasible before the emergence of LLMs. Now, we’re seeing real-world, practical applications of agent-based systems powered by LLMs, making what was once impossible both achievable and impactful.”

It’s an architecture that reads like a relay race, from one agent to the next – each adding its own value to the misinformation management lifecycle. A minimal sketch of this hand-off follows the list below.

Inside the five-agent blueprint

“Before the LLM era, it was possible to detect misinformation with some confidence, but to correct it automatically was not possible,” Aditya says. “Now, it is.”

01

Indexer

Crawls vetted sources like newsrooms, government portals, and peer-reviewed repositories, converting trusted information into searchable embeddings.

02

Classifier

Inspects new content and labels it for potential issues like satire, cherry-picked stats, deepfake video, or propaganda.

03

Extractor

Traces the spread of a narrative back to its origin point and ranks the credibility of all associated evidence. 

04

Corrector

Drafts a factual correction, grounding each sentence in the top-ranked sources provided by the Indexer and Extractor. 
 

05

Verifier

Performs a final check on the correction’s logic, source credibility, and adherence to safety policies before releasing it, or escalating to a human expert. 
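
To make the relay concrete, here is a minimal Python sketch of how the five roles could hand a claim from one to the next. The class and function names are illustrative placeholders, not the interfaces, prompts, or models of the published system; each agent body is reduced to a stub.

# A minimal, illustrative sketch of the five-agent relay described above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Claim:
    """State object handed from agent to agent along the pipeline."""
    text: str
    label: Optional[str] = None          # e.g. "satire", "cherry-picked", "deepfake"
    origin: Optional[str] = None         # traced source of the narrative
    evidence: List[str] = field(default_factory=list)
    correction: Optional[str] = None
    verified: bool = False

def indexer(trusted_docs):
    # Stand-in for crawling vetted sources and storing searchable embeddings.
    return list(trusted_docs)

def classifier(claim):
    claim.label = "unverified"           # a small fine-tuned model would decide this
    return claim

def extractor(claim, index):
    claim.origin = "unknown"             # trace the narrative back to its origin
    claim.evidence = index[:3]           # keep the top-ranked, most credible evidence
    return claim

def corrector(claim):
    claim.correction = f"Correction grounded in {len(claim.evidence)} vetted source(s)."
    return claim

def verifier(claim):
    # Final check on logic, sources, and safety; otherwise escalate to a human expert.
    claim.verified = bool(claim.correction and claim.evidence)
    return claim

def run_pipeline(text, trusted_docs):
    index = indexer(trusted_docs)
    claim = Claim(text=text)
    claim = classifier(claim)
    claim = extractor(claim, index)
    claim = corrector(claim)
    return verifier(claim)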

The practicality of this blueprint hinges on a crucial design choice: using smaller, specialized LLMs. “We can’t call 100-billion-plus parameter giants five times in a single workflow; the compute and cost would explode,” he explains. Instead, an agent like the Classifier can be powered by a smaller model fine-tuned for the singular task of classification.
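
As a rough illustration of that design choice, a compact fine-tuned model can serve a single agent’s narrow task. The sketch below assumes a hypothetical small classification checkpoint served through the Hugging Face transformers pipeline; the model name is a placeholder, not an actual release.

# Illustrative only: a compact, task-specific model standing in for the Classifier agent.
from transformers import pipeline

classify = pipeline(
    "text-classification",
    model="example-org/small-misinfo-classifier",  # hypothetical fine-tuned small model
    device=-1,                                     # CPU can be enough for a compact model
)

print(classify("Study proves chocolate cures the common cold."))
# e.g. [{'label': 'misleading', 'score': 0.97}]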

He adds that while hallucinations are a known issue, the risk can be effectively managed by supplementing the model with trusted knowledge through Retrieval-Augmented Generation (RAG) and other grounding techniques.
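
The idea behind that grounding step can be shown in a few lines: retrieve the most relevant vetted passages, then build a prompt that instructs the model to answer only from that evidence. This is a minimal sketch with a toy character-count embedding standing in for a real sentence encoder.

# Minimal RAG-style grounding sketch; embed() is a toy stand-in for a real encoder.
import numpy as np

def embed(text):
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

def retrieve(query, passages, k=2):
    q = embed(query)
    return sorted(passages, key=lambda p: float(q @ embed(p)), reverse=True)[:k]

def build_grounded_prompt(claim, passages):
    context = "\n".join(f"- {p}" for p in retrieve(claim, passages))
    return (
        "Using ONLY the sources below, assess and correct the claim.\n"
        f"Sources:\n{context}\n\nClaim: {claim}\nCorrection:"
    )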

Beyond the GPU: AI's new performance hurdles

“The demand for inference computing is going to skyrocket now that almost all industry verticals, from workflows to knowledge-worker assistants to information retrieval, will eventually be powered by LLM-based AI agents,” Aditya says.

The bottlenecks are no longer just about GPU power during training. They now span the entire infrastructure stack. Building intelligent, responsive agents requires more than just powerful models; it demands an impeccably optimized and coordinated system. Under the hood, Aditya sees three fundamental constraints fighting back.

01

First, data:

“If the data and labels go wrong, everything’s going to be wrong,” he warns, emphasizing that data quality is the bedrock of any reliable AI system.

02

Second, memory:

even GPUs with 80 GB of VRAM can choke on dense embeddings and multimodal content, requiring thoughtful batching and memory-reuse strategies (a minimal batching sketch follows this list).

03

Third, energy:

language models drink megawatts, and the exponential growth in their use across all sectors means their energy requirements will continue to be a fundamental challenge.
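
As a rough illustration of the memory point above, the generic PyTorch pattern below embeds a large corpus in small micro-batches, moves each result off the GPU immediately, and releases cached memory between stages. The batch size and the encode function are assumptions for the sketch, not details of Meta’s pipeline.

# Micro-batching with aggressive off-loading to keep GPU memory bounded.
import torch

@torch.no_grad()
def embed_in_batches(texts, encode_fn, batch_size=32, device="cuda"):
    """encode_fn(list_of_texts) must return a (batch, dim) tensor on `device`."""
    chunks = []
    for start in range(0, len(texts), batch_size):
        emb = encode_fn(texts[start:start + batch_size])  # lives on the GPU briefly
        chunks.append(emb.to("cpu"))                      # keep the on-device working set small
        del emb
    if device == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()                          # release cached blocks between stages
    return torch.cat(chunks, dim=0)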

The frugal AI toolkit

Aditya’s antidote to the immense cost of AI is to treat computational efficiency as a first principle. In the LLM world, where compute is no longer an afterthought, his strategy is not about simply cutting costs, but about a sophisticated allocation of resources. The key is knowing when to spend for a crucial advantage and when to guard it like gold, ensuring world-class results are delivered without unsustainable expense. 

This philosophy translates into a full-stack approach, applying relentless optimization at every stage of a model’s life: during its initial, marathon-like training; in its daily life of high-speed inference; and deep down in its physical environment and the hardware itself. The following are just a few of the many optimization techniques employed.

During training, this means distributing a colossal model across thousands of GPUs through parallelism along every dimension (data, model, pipeline, expert, and context) and using clever techniques like Parameter-Efficient Fine-Tuning (PEFT) to update only a small fraction of its parameters. This is complemented by architectural optimizations like Flash Attention to speed up core computations and advanced memory management with the Zero Redundancy Optimizer (ZeRO) to eliminate waste.
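
As a hedged example of the PEFT idea, the sketch below uses the Hugging Face peft library to wrap a small open model with LoRA adapters so that only a tiny fraction of parameters is trained. The base model and hyperparameters are illustrative defaults, not the configuration used in Aditya’s work.

# LoRA-style Parameter-Efficient Fine-Tuning: train small adapters, freeze the base model.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # any small causal LM

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank adapter matrices
    lora_alpha=16,                        # scaling factor for the adapter update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in OPT-style models
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()        # typically well under 1% of all parameters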

For inference, the model is put on a “strict diet.” Its size is shrunk through knowledge distillation, where a nimble “student” model learns from a massive “teacher,” and quantization, which simplifies the model’s mathematical language into a faster, more compact format. Its memory is then managed with key-value caching so it doesn’t have to re-read an entire conversation for every new word.  
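
To make the distillation idea concrete, here is a bare-bones version of the classic soft-target loss: the student is trained to match the teacher’s temperature-softened output distribution as well as the ground-truth labels. The temperature and mixing weight are illustrative defaults.

# Knowledge-distillation loss: blend soft teacher targets with hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard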

Finally, at the hardware level, the AI is meticulously tailored to the silicon, optimizing data pathways to cut down on wasteful round trips. 

These trade-offs are no longer optional. As Aditya notes, “A few percent improvement in inference efficiency can save thousands of machines and millions of dollars in cost in a large-scale system.” It’s this surgical approach to efficiency that makes modern AI both powerful and practical.
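
A purely illustrative back-of-envelope calculation (assumed numbers, not Meta figures) shows why a few percent matters at fleet scale:

# Illustrative fleet-scale arithmetic; every number below is an assumption.
fleet_size = 100_000              # assumed machines serving inference
efficiency_gain = 0.03            # a "few percent" improvement
cost_per_machine_year = 10_000    # assumed fully loaded cost in dollars per year

machines_saved = fleet_size * efficiency_gain
dollars_saved = machines_saved * cost_per_machine_year
print(f"~{machines_saved:.0f} machines, ~${dollars_saved:,.0f} per year")
# ~3000 machines, ~$30,000,000 per year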

Where the road bends next

Aditya expects a not-so-distant world in which “every knowledge worker will have an agent tuned to their workflow,” from radiologists to risk analysts.

Multimodal models will catch up to text, letting a doctor converse with a chest X-ray using natural language to surface insights, or a novelist storyboard a scene with generated video. Interfaces, he predicts, will melt away. The “middle layer of clicks”, such as menus, dropdowns, and buttons, will be replaced by conversational agents. Highly optimized, personalized agents working behind the scenes will be woven into our lives. Whether you’re booking a ride, filing an insurance claim, or adjusting your investment portfolio, soon it will all be possible through a few intuitive exchanges in natural language. No toggles, no forms – just conversation.

All of it will ride on the lessons Aditya is teaching today: keep models lean and data clean, optimize costs, and evaluate ruthlessly.

Author

Aditya Gautam
Machine-learning Tech Lead at Meta  
View on LinkedIn

Aditya Gautam is a seasoned Artificial Intelligence expert specializing in large language models (LLMs) and AI agents, blending pioneering research with high-impact industrial applications. At Meta, he has led key initiatives using LLMs to tackle misinformation and explore user interests on platforms like Facebook Reels. His research extends this practical work, including papers on using multi-agent systems to combat the full lifecycle of misinformation. An active voice in the Generative AI community, he frequently speaks at major conferences on critical topics such as agentic system evaluation, LLM cost optimization, and their use in recommendation systems, while also contributing as a reviewer for top-tier AI conferences like NeurIPS and ICML.
