February 12, 2026
Nosana Partners with Sallar to Expand the Frontiers of Distributed Compute
The way compute is organized is changing. Millions of everyday devices are already participating in global networks, and today we are excited to announce a strategic partnership between Nosana and Sallar that brings together mobile-based computing and high-performance GPU infrastructure to create a more flexible, scalable ecosystem for AI builders.
Powering a New Wave of Networked Compute
Sallar is building an innovative network that turns smartphones into active participants in a new digital economy. By unlocking the idle computing power of billions of mobile devices, Sallar gives people a way to contribute to real AI workloads while earning rewards for their participation.
This approach makes compute more accessible, more distributed, and more community-driven than traditional centralized systems.
Expanding Possibilities with Nosana
As AI applications grow more sophisticated, infrastructure needs are becoming increasingly diverse. No single type of compute is optimal for every stage of the AI lifecycle; different tasks require different trade-offs between cost, speed, scale, and latency.
Some workloads are perfectly suited for large pools of distributed mobile devices such as:
- Web crawling and data collection
- Cleaning and preprocessing raw data
- Running small LLMs and lightweight AI tasks
- Generating embeddings in batches
- Filtering, chunking, and preparing data for AI systems
These tasks benefit from massive scale and steady throughput rather than peak performance per job. Sallar’s mobile network is particularly effective here, allowing builders to process large volumes of data efficiently using widely distributed devices.
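As an illustration, the filtering-and-chunking step mentioned above might look like the minimal sketch below. The function name, chunk size, and overlap are assumptions for the example, not part of either network's API:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split raw text into overlapping chunks for downstream embedding.

    chunk_size and overlap are illustrative defaults, not values
    prescribed by Sallar or Nosana.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap  # advance by less than a full chunk to overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk.strip():  # drop whitespace-only fragments
            chunks.append(chunk)
    return chunks
```

Work like this is embarrassingly parallel: each document can be chunked independently on any available device, which is exactly the shape of job a large pool of phones handles well.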
However, other workloads demand significantly more compute power and lower latency, such as:
- Running larger AI models for reasoning and generation
- Powering AI agents that coordinate multi-step processes
- Reranking large candidate sets
- Running vision models on images and documents
- Serving real-time AI applications with consistent performance
These tasks require specialized hardware, high bandwidth, and predictable performance: qualities that lightweight devices alone cannot reliably provide.
This is where Nosana comes in.
Through this partnership, Nosana provides a high-performance GPU layer that complements Sallar’s mobile network. Instead of replacing Sallar’s approach, Nosana extends it — enabling a seamless flow between lightweight, distributed processing and heavy-duty AI compute. Builders can use each tier for what it does best without needing to redesign their systems or manage separate, disconnected infrastructures.
Together, the two networks create a clear two-tier system:
- Small-device layer (Sallar): Ideal for ingestion, preprocessing, and scalable data tasks that require broad distribution and cost efficiency.
- GPU layer (Nosana): Designed for heavy models, interactive agents, real-time inference, and performance-critical workloads.
This layered architecture makes the overall system more flexible, cost-effective, and capable of supporting a wider range of AI use cases, from data preparation to production-grade AI services, within a single, connected ecosystem.
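The routing logic implied by this two-tier split can be sketched as follows. The tier identifiers and workload category names here are hypothetical, chosen only to mirror the lists above; neither network exposes this exact interface:

```python
from enum import Enum

class Tier(Enum):
    MOBILE = "sallar-mobile"  # hypothetical label for the small-device layer
    GPU = "nosana-gpu"        # hypothetical label for the GPU layer

# Illustrative mapping of workload categories to the GPU tier, mirroring
# the performance-critical tasks listed above. These names are assumptions.
HEAVY_WORKLOADS = {
    "large-model-inference",
    "agent-orchestration",
    "reranking",
    "vision",
    "realtime-serving",
}

def route(workload: str) -> Tier:
    """Send latency- and compute-intensive jobs to the GPU layer and
    everything else to the broadly distributed mobile layer."""
    return Tier.GPU if workload in HEAVY_WORKLOADS else Tier.MOBILE
```

The point of the sketch is that the split is a policy decision at dispatch time: builders keep one pipeline and let each job land on the tier that suits it.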
What This Means for Builders and Users
For Sallar’s ecosystem, this collaboration unlocks:
- Access to high-performance compute alongside mobile-based resources
- Greater flexibility in designing and scaling AI applications
- A more robust infrastructure stack that can support a wider variety of workloads
For Nosana, this partnership strengthens its role as a core compute provider for next-generation AI infrastructure, supporting innovative networks and real-world applications.
A Shared Vision
Both Nosana and Sallar believe in a future where compute is widely distributed, accessible, and economically inclusive. By combining Sallar’s massive mobile network with Nosana’s GPU infrastructure, this partnership moves that vision forward in a practical, builder-friendly way.
We are excited to see what developers and companies will create at the intersection of these two networks.
Stay tuned; more to come.
Useful Links