Spatial intelligence company Wherobots today announced a private preview of RasterFlow, a satellite image preparation and inference solution designed to make it easier to gain insights from that kind of data.
“RasterFlow is a new compute engine that is going to help feed data about the physical world to all sorts of different types of applications, but then also make it so that we can process it and serve other applications as well,” said Ben Pruden, head of go-to-market at Wherobots.
By streamlining this process, RasterFlow aims to let customers run AI models on physical world data to get answers to physical world questions, such as predicting fields and their boundaries from an overhead view of farmland.
According to the company, most data teams don’t have the expertise or budget to build out the infrastructure needed for working with spatial data. For instance, a team looking to analyze a wildfire’s current state and predicted spread in order to manage risk to buildings would need to ingest and prepare images for inference; deploy an ML model on those images; tune model inference; measure change over time and forecast spread using models that factor in wind direction, speed, vegetation, buildings, and other infrastructure; and join the model’s predictions with other context such as building footprints or power lines.
RasterFlow attempts to address these challenges by offering fully managed operations for imagery ingestion, preparation, and inference, with built-in support for popular open datasets and models.
Ingestion and preparation involve tasks like removing cloud cover or edge effects and building a high-quality, inference-ready mosaic, the company explained.
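RasterFlow’s API is not public, but the kind of work being automated here looks roughly like the following sketch, which masks cloudy pixels and stitches scenes into a mosaic using the open source rasterio library. The file names and the cloud-mask convention are assumptions for illustration, and steps like edge trimming and reprojection are omitted.

```python
import numpy as np
import rasterio
from rasterio.merge import merge

NODATA = 0  # assumed nodata convention for the masked scenes

# Hypothetical inputs: per-scene imagery plus matching single-band cloud masks
# (1 = cloudy, 0 = clear is assumed here; real products vary).
scene_paths = ["scene_a.tif", "scene_b.tif"]
mask_paths = ["scene_a_mask.tif", "scene_b_mask.tif"]

masked_paths = []
for scene_path, mask_path in zip(scene_paths, mask_paths):
    with rasterio.open(scene_path) as scene, rasterio.open(mask_path) as mask:
        data = scene.read()                  # shape: (bands, rows, cols)
        cloudy = mask.read(1).astype(bool)   # shape: (rows, cols), same grid assumed
        data[:, cloudy] = NODATA             # blank out cloudy pixels in every band
        profile = scene.profile.copy()
    profile.update(nodata=NODATA)
    out_path = scene_path.replace(".tif", "_masked.tif")
    with rasterio.open(out_path, "w", **profile) as dst:
        dst.write(data)
    masked_paths.append(out_path)

# Stitch the cleaned scenes into a single inference-ready mosaic; with a nodata
# value set, merge() fills masked pixels from overlapping scenes where it can.
sources = [rasterio.open(p) for p in masked_paths]
mosaic, transform = merge(sources)
for src in sources:
    src.close()
```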
Inference results can be converted into vector geometries and then integrated into lakehouse architectures, or stored in Apache Iceberg tables as Parquet files. Users can also leverage WherobotsDB or other lakehouse engines to postprocess the results.
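The announcement doesn’t include a schema, but as an illustration of what postprocessing those results in a lakehouse engine might look like, here is a minimal sketch using Apache Sedona’s spatial SQL on Spark (one open source option, not WherobotsDB’s own API). The Iceberg catalog, tables, and WKT geometry columns are assumptions for the example.

```python
from sedona.spark import SedonaContext

# Assumes a Spark session already configured with Apache Sedona and an Iceberg
# catalog named "lakehouse"; the catalog, schema, table, and column names below
# are illustrative, not part of any published RasterFlow schema.
config = SedonaContext.builder().getOrCreate()
sedona = SedonaContext.create(config)

# Join model-predicted fire perimeters (stored as vector geometries in an
# Iceberg table) against building footprints to find structures at risk.
at_risk = sedona.sql("""
    SELECT b.building_id,
           p.perimeter_id,
           p.detected_at
    FROM   lakehouse.wildfire.fire_perimeters      AS p
    JOIN   lakehouse.reference.building_footprints AS b
      ON   ST_Intersects(ST_GeomFromWKT(p.geometry_wkt),
                         ST_GeomFromWKT(b.geometry_wkt))
""")

at_risk.show()
```

WherobotsDB, or any other engine that can read the same Iceberg tables, could run an equivalent query.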
Wherobots believes RasterFlow helps set data teams up for a future in which applying AI to spatial data is more common. Pruden explained that foundation models are being built today for extracting and creating embeddings from spatial data, but there is still work to be done before applications can actually consume them.
“We’re just at the very beginning of this long marathon of interesting applications that are going to be developed during the next few years, I think,” Pruden said.
For example, he said that he’s seen Meta start to talk more about physical systems, and that Jeff Bezos’ new Project Prometheus is all about applying AI to physical tasks.
“Ultimately, these are all problems about running applications or infrastructure in the physical world,” said Pruden. “It’s a very early time for this to get developed, but the people that are really thinking deeply about the needs of where AI and systems are going are already actively shifting their time into it. And we’ve been building in this space already, so we kind of feel like we’re ahead of the game, but we’re also excited to see the momentum picking up over the market overall.”
