Overcoming the pilot-to-production cost barrier with Edge AI

Q&A with Jags Kandasamy, CEO | Latent AI
Tell us about yourself and your role with Latent AI.
I'm an entrepreneur who has spent decades working in the field of embedded systems and computing. Before launching Latent AI, I was the Chief Product Officer at OtoSense, which was successfully acquired. When I was ready to move forward with my next venture, I connected with SRI International through their Entrepreneur in Residence program.
At SRI, I met my technical co-founder, Sek Chai, who had been developing advanced AI algorithms and low-power edge computing solutions for more than five years, funded by DARPA and the Department of Defense. SRI had already established the fundamental scientific principles for the technology with proven benchmarks and prototypes—we just needed the proper business case.
As CEO and co-founder of Latent AI, I lead our mission to provide tools that integrate with existing workflows, helping AI engineers deliver neural network models that are optimized and compressed for efficient execution on any chosen platform and edge device. We're essentially the "pickaxe suppliers" in the AI gold rush—while everyone is trying to secure land and mine for themselves, we provide the tools to make that mining process more efficient.
In addition to running the company, I also collaborate with policy organizations, such as the Atlantic Council, on strategic frameworks for edge computing in defense applications. This work helps bridge the gap between technical innovation and practical implementation across critical sectors.
What excites me most is that we're just scratching the surface of what's possible with edge AI. We're enabling customers and enterprises to maximize the benefits of AI in verticals and use cases that were unthinkable before. It's an exciting time, and we're paving the path for a more intelligent edge across industries from defense to manufacturing.
Many manufacturers tell us their AI pilots work but become too expensive to scale. How do you identify when edge AI becomes the more cost-effective approach?
When manufacturers begin exploring AI pilots, they focus solely on accuracy, on ensuring the AI works effectively for that particular use case. They ignore everything else: infrastructure costs, scalability. They pour in all the resources they can and make that pilot extremely successful.
The moment you try to scale it, it's like a gourmet chef creating a truly spectacular dish in their own kitchen, then trying to replicate it across 2,500 fast-food restaurants. It's not going to work.
During pilots, you have all the resources—engineers, servers, hardware—and you'll make it work because you're putting your utmost attention on it. But when you try to replicate that across 100, 200, or 500 locations, it falls on its face.
Edge computing isn't always about placing sensors and compute devices everywhere, either. You must weigh the cost of wiring and managing hundreds of small devices against centralized compute. Think about changing the smoke alarm batteries throughout your house: they don't all die at the same time, so you can't just set aside one weekend to change them all.
You need a scaled approach. For example, can you feed 10 camera feeds into one small compute unit instead of deploying 10 separate compute units, one per camera? You have to find the middle ground: you can't do everything in the cloud, nor can you deploy small sensors everywhere. The key is finding the balance where edge computing becomes cost-effective.
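The fan-in idea described here can be sketched in a few lines of Python: one edge unit polls several feeds and pushes the frames through a single shared model in one batched pass. This is a minimal illustration, not Latent AI's implementation; the `read_frame` and `model` functions are hypothetical stand-ins for a real camera API and inference runtime.

```python
import random

NUM_FEEDS = 10  # ten cameras sharing one compute unit instead of ten units


def read_frame(feed_id: int) -> list[float]:
    """Stand-in for grabbing one frame from camera `feed_id` (hypothetical)."""
    return [random.random() for _ in range(4)]


def model(batch: list[list[float]]) -> list[bool]:
    """Stand-in for one shared model scoring a whole batch of frames at once."""
    return [sum(frame) > 3.5 for frame in batch]


def step() -> list[bool]:
    """One inference cycle: gather a frame per feed, run a single batched pass."""
    batch = [read_frame(i) for i in range(NUM_FEEDS)]
    return model(batch)


alerts = step()
assert len(alerts) == NUM_FEEDS  # one verdict per camera, one compute unit
```

The design choice is the batched call: amortizing one model invocation across all ten feeds is what makes a single small compute unit viable where ten separate units would not be.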
Beyond cost savings, what performance improvements are manufacturers seeing when they move AI processing to the edge?
Manufacturers see several key improvements:
- Enhanced productivity
- Better predictive maintenance capabilities
- Reduced downtime
- Increased reliability
Reliability is a particularly critical factor that manufacturers focus on. When you can process data locally and respond immediately to issues, you maintain more consistent operations and prevent cascading problems that could affect the entire production line.
What advice would you give to a manufacturing executive who wants to transition from cloud-based AI pilots to scalable edge deployments?
Always start from your use case. If you have mission-critical applications that require instant responses, you need to look at edge computing. If you're doing batch processing or overnight operations where time criticality isn't a factor, you can keep those in the cloud.
Here's a practical example: Consider metal stamping presses used in automotive or metal fabrication. These presses are critical components in the manufacturing pipeline. If a part isn't pressed correctly, it won't fit properly, and your assembly line gets jammed.
If you're monitoring that machine with AI, using time-series models to verify that the press comes down at the right time and brakes at the right time, you need a mission-critical AI system running at the edge so it can identify problems and stop the press immediately.
On the other hand, if you're analyzing how trucks are loaded post-production or how raw materials come in, that may not be mission-critical because it's happening in batch mode. Those processes can remain in the cloud.
The key is matching your deployment strategy to your operational requirements.
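The press-monitoring scenario above can be sketched as a simple rolling-statistics detector running locally on the edge device: flag a stroke whose cycle time deviates sharply from recent history, with no cloud round trip in the loop. This is a minimal illustration of the idea, not Latent AI's product; the window size and z-score threshold are arbitrary example values.

```python
from collections import deque
import statistics


class PressMonitor:
    """Rolling z-score anomaly detector for press cycle timing.

    Illustrative sketch: window size and threshold are example values,
    not tuned figures from any real deployment.
    """

    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.window = deque(maxlen=window)  # recent cycle times, ms
        self.z_threshold = z_threshold

    def observe(self, cycle_time_ms: float) -> bool:
        """Return True if this stroke looks anomalous and the line should stop."""
        if len(self.window) >= 10:  # need a baseline before judging
            mean = statistics.fmean(self.window)
            stdev = statistics.stdev(self.window)
            if stdev > 0 and abs(cycle_time_ms - mean) / stdev > self.z_threshold:
                return True  # trigger a local stop/alert, no cloud round trip
        self.window.append(cycle_time_ms)
        return False


monitor = PressMonitor()
for t in [100.1, 99.8, 100.3] * 5:      # normal press cycles
    assert monitor.observe(t) is False
assert monitor.observe(250.0) is True   # a badly timed stroke is caught
```

Because the detector holds only a small window of recent measurements, it fits comfortably on a low-power edge device and reacts within a single cycle, which is exactly the instant-response requirement that rules out a cloud round trip.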
As this technology matures, what should manufacturers expect over the next few years?
The North Star for every manufacturer is "lights out manufacturing"—where raw materials come in one end and finished products come out the other without anyone touching them, similar to how computer automation processes a file and generates a report.
To achieve that ultimate goal, manufacturers must work through multiple steps. Some processes may never reach 100% automation, but the question becomes: What can we do to achieve automation within different processes? That's where AI comes in.
Achieving this requires edge AI, because it depends on local processing and intelligence. Think of it like cybersecurity maturity levels: you don't achieve level five cyber maturity from day one. You start at level one, then progress to levels two, three, and beyond.
If level five represents complete autonomy and lights-out manufacturing, manufacturers need to scale their organizations and implement processes that move them from level one autonomy to level four or five. This progression will happen gradually as AI capabilities mature and manufacturers build confidence in automated systems.
Jags writes more about the criticality of the edge in the blog: The Performance Imperative: Why Edge AI Is Becoming Mission-Critical.
Jags Kandasamy is the CEO and Co-founder of Latent AI. With over two decades of experience in embedded systems and computing, Jags has successfully built and exited a previous AI startup and holds multiple patents in edge intelligence and distributed computing. He is passionate about solving the environmental challenges of AI deployment while making artificial intelligence accessible across industries without compromising performance or security. He is a regular speaker at industry conferences on edge computing, AI optimization, and sustainable technology deployment.
The content & opinions in this article are the author’s and do not necessarily represent the views of ManufacturingTomorrow