In a recent column, I argued that the “NewSpace” revolution isn’t about making space simple. It’s about making it less expensive to confront the fundamental reality that designing and manufacturing reliable space hardware is bound to processes poorly suited to mid-course changes—changes that, given the complexity of the task, are almost inevitable.
In other words, mistakes in space are still expensive. Getting there cheaply hasn’t solved that problem. But what is changing is that some companies are finding ways to make iteration cheaper and smarter — without sacrificing reliability.
This is a different kind of revolution. It’s not about moving fast and breaking things. It’s about designing better tools to find your issues before they break something.
One person who saw this coming early is Jeremy Perrin, co-founder of Connektica. I’ve had the privilege of working with Jeremy for more than five years. From the very beginning, he had a clear vision: that the space sector would need enabling infrastructure — not just flashier satellites or cheaper rockets, but the test platforms and validation tools that make dependable space systems possible at scale.
At the time, he was ahead of the curve. Now, the curve is catching up.
I spoke with Jeremy recently about what he has learned and how he is working to build exactly that kind of infrastructure — and how that approach is quietly changing the economics of complexity in space.
Iain: In the past, the only way we could make reliable space hardware was to rely on a robust testing program. But that has meant that our standard processes are seen as slow and expensive, right?
Jeremy: The legacy approach to space hardware testing was built around the V-cycle development model, where long design phases were followed by extremely conservative testing campaigns. Engineers would test every subsystem at every stage of the assembly, integration, and testing (AIT) process, regardless of whether it was strictly necessary. This “test everything” mindset was rooted in risk aversion: large geostationary satellite programs—often funded by governments—had budgets of $200–300M per unit and simply could not afford in-orbit failures.
However, you’re right: this rigor came at a cost. Testing required heavy scripting, manual data collection, and deep engineering reviews, which consumed vast amounts of time and resources. The legacy primes were operating in an environment where low production volumes (one spacecraft every 2–3 years) and high margins justified this approach. But as NewSpace emerged, with demand shifting to producing 1–5 spacecraft per week at $1–15M each, this model became prohibitively expensive and incompatible with industrial-scale production.
Iain: I have known you for about five years now, and I’ve watched you build your company around a core insight that a lot of people missed when you founded Connektica. Do you think the rest of the sector is starting to see what you saw five years ago?
Jeremy: Yes! The industry is now converging on the insight that Connektica identified early: you cannot industrialize space manufacturing with artisanal, one-off test systems.
Five years ago, when we told companies they’d need dedicated test infrastructure, they didn’t see the point. They thought: “We’ll just buy instruments, throw together a few scripts, and manage it ourselves.” But that approach quickly breaks down when you need to validate performance fast or scale production.
If you use your best engineers to build test systems, you take them away from design. And if you use interns, the system isn’t robust. So neither option really works in the long term.
Today, both small suppliers and large NewSpace integrators are recognizing that commercial, off-the-shelf AIT platforms are critical. They free up top design engineers from writing fragile custom scripts, ensure robustness when interns or junior staff are involved, and provide a replicable system that scales with production.
Iain: OK, fair enough, but still, that “legacy” approach must have some advantages. Didn’t I hear that a recent NASA study found that NewSpace satellites are failing at a much higher rate than those built by legacy space companies using the old-style system?
Jeremy: Yes. The legacy method was (and is) extremely conservative, but it did ensure system reliability. NASA and ESA data show that small satellites, especially those from startups, have a failure rate roughly 300% higher than prime-built satellites: ~40% vs. ~10%. The reason isn’t that startups are less talented, but that legacy players still apply a system-level approach: not only validating each component, but also repeatedly testing the full spacecraft, including interfaces, software versions, and redundancies.
In contrast, many NewSpace companies focus only on validating individual subsystem performance, neglecting the end-to-end system integration. While this allows for speed and cost reduction, it introduces risks that materialize in orbit. The key takeaway is that quality and repeatability are embedded in legacy workflows. The challenge for NewSpace is to achieve that same reliability without reverting to artisanal processes that kill scalability.
Iain: And to be clear, those legacy companies are still building a lot of satellites, so they represent a significant market for any small company that wants to be a component supplier to the space business. And I am willing to believe that those same customers, the large ones, are still pretty risk averse — and for good reason. They need trustworthy suppliers and that isn’t going to change.
But those same small suppliers cannot meet that bar unless they are able to build and demonstrate a replicable test system. Which doesn’t come easy and has never come cheap. Can they afford to spend time on that? They barely have enough time to meet the delivery windows they are given; how can they find the time to invest in a quality assurance system at the same time?
Jeremy: That’s the crux of it: they don’t have the time. When you’re under pressure to deliver, the instinct is to cut corners. But we’ve seen suppliers hit a wall when volumes ramp up, with failures, rework, or even lost customers.
They eventually realize they have no choice but to invest in quality infrastructure if they want to survive. The key is embedding quality directly into the process: digital workflows, automated data capture, and compliance baked into execution. That way, quality isn’t a parallel activity that slows you down; it becomes the way you work. Competitiveness follows from that industrial process, not the other way around.
Iain: Yes, I see that a lot. It seems to me that one of the mistakes small companies – especially startups – make is that they get focused on functionality and performance, thinking that they will win the work with their technology and can figure out afterward how to meet the customers’ quality standards. But it doesn’t have to work that way. If you are going to be in this business, you have to develop a reputation for quality as well as performance – right from the start.
Jeremy: Exactly. Quality is the real moat for a startup in this industry. Startups are naturally under pressure to demonstrate performance fast, whether to impress investors, win contracts, or hit a launch window. As a result, they emphasize functionality and performance metrics in their prototypes, often at the expense of robust quality systems.
The problem is that without traceability and quality embedded into AIT, they struggle to learn from failures, improve designs iteratively, and maintain reliability across builds. And if you can’t prove repeatability, you won’t keep customers.
To be successful in the long term, you want to build quality in from day one. The ones who do it well track every operator action, every configuration, and every test procedure with full version control. They collect the data they need to improve their design with each iteration. The trick is doing that without putting extra burden on operators. That’s why digital tools matter! They let you capture all that information naturally as part of execution.
Iain: OK, you’re a small startup, and you impress the customer you have always thought would be the key to your success. But now you have another problem. They want a scale of production you have never managed before. And they want the same quality, reliability, and auditability that you delivered the first time. And let’s face it, there is no worse fate for a startup than impressing your customer on the first contract and disappointing them on the second. How do you plan to avoid that outcome while also scaling up your volumes?
Jeremy: That’s a very real risk. When you scale, you can’t just hire more senior engineers — you need to bring in less experienced operators and trust them to deliver at the same quality. Paper procedures or ad-hoc scripts won’t cut it. You need digital, guided processes that error-proof execution, automate test sequencing, and generate compliance reports automatically. That way, every operator, no matter their experience level, executes to the same standard. Startups that prepare this way can scale without losing the reliability that got them their first contract. It’s about building the infrastructure to grow before the demand overwhelms you.
Iain: It’s often said that innovation is about seeing the future before it arrives. But in space, the future tends to arrive slowly — and then all at once. I call Jeremy Perrin a NewSpace revolutionary because he saw where the bottlenecks would be long before most investors or customers were ready to admit they existed. And now that those bottlenecks are here, companies like Connektica aren’t just solving them — they’re changing the way the industry thinks about solving problems in the first place. Because in the end, the real NewSpace revolution isn’t about breaking the rules. It’s about understanding the constraints — and designing smarter ways to work within them.