Introducing Seek v0.1 – training-free semantic search for robots in real buildings.

Semantic search for robots

Intuitive visual search for robots

Seek is a training-free semantic navigation layer for robots. Give it a simple phrase – “blue trolley in bay 4”, “visitor kiosk in atrium B” – and it guides your robot through real buildings as if it had worked there for years.

No site-specific training runs. No custom detectors per deployment. Just natural language in, and reliable search behaviour out.

Built for labs, warehouses, hospitals, and campuses where layouts shift, inventory moves, and robots need to adapt in real time.

# semantic-search session

target: "blue trolley in bay 4"
environment: ward corridor
mode: training-free

> starting search...
> narrowing down promising areas...
> suggesting safe approach pose...
Helps robotics teams deliver:

  • Faster recovery runs
  • Fewer manual searches
  • Clearer robot behaviour
  • Reusable search logic

Turn your buildings into language-searchable space

Instead of “go to coordinate X”, ask your robot to “go find that thing” – and let Seek handle the rest.

Search

Point the robot at a simple phrase like “blue trolley in bay 4” and let Seek decide where to look first, next, and when it’s close enough to stop.

Main Features //

Search that feels natural to humans, reliable for robots

Seek gives robots a sense of what they’re looking for, where it might be, and when they’re close enough — all from a simple phrase.

Understands natural descriptions

Describe targets the way teams already talk: “spill kit in bay 4”, “equipment trolley for theatre 2”, “scanner cart 3”.

Remembers where it has looked

Seek keeps track of what the robot has already seen, so it doesn’t keep circling the same empty corner when there are better places to check.

Moves with intent

The search behaviour focuses time and motion on areas that matter, instead of blindly sweeping every corridor just to “cover the map”.

Developer First //

Add “find-by-name” in a single loop

Seek plugs into the stack you already have. You keep SLAM, planning, and safety. We provide the search behaviour and high-level goals.

Works with ROS 2, simulators, custom frameworks, and real robots. No changes to your low-level control required.

from seek import Seek

seek = Seek(api_key="ss-YOUR_API_KEY")
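
# `robot`, `rgbd`, and `nav` below stand in for your own stack:
# a pose source, an RGB-D camera feed, and your navigation layer.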

# Start a natural-language search
session = seek.search_start(
    target="blue trolley in bay 4",
    pose=robot.pose(),
    frame=rgbd.latest_frame(),
)

# Ask Seek where to look next
while not session.done:
    waypoint = seek.search_next(session.id)
    nav.go_to(waypoint)

    # Stream updates as the robot moves; search_update
    # returns the refreshed session state
    session = seek.search_update(
        session_id=session.id,
        pose=robot.pose(),
        frame=rgbd.latest_frame(),
    )

# Verify the target once the search completes
result = seek.search_verify(session.id)
if result.found:
    nav.go_to(result.approach_pose)

Integrations //

Drop into the stack you already run

Seek fits alongside your existing tools instead of replacing them.

  • Use Seek waypoints as goals for your existing planners (see the sketch below).
  • Prototype everything in simulation before touching real hardware.
  • Export search runs as traces for debugging, demos, and reports.
  • Hook logs into your current observability and monitoring stack.
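
If your planner is Nav2, one way to use Seek waypoints as goals is to convert them into PoseStamped messages for nav2_simple_commander. A minimal sketch, assuming a Seek waypoint exposes x, y, and yaw in the map frame (those field names are an assumption, not a confirmed Seek schema):

import math

import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

rclpy.init()
navigator = BasicNavigator()

def waypoint_to_goal(waypoint) -> PoseStamped:
    # Convert a Seek waypoint (x, y, yaw in the map frame)
    # into a Nav2 goal pose.
    goal = PoseStamped()
    goal.header.frame_id = "map"
    goal.header.stamp = navigator.get_clock().now().to_msg()
    goal.pose.position.x = waypoint.x
    goal.pose.position.y = waypoint.y
    # Yaw-only orientation, expressed as a quaternion about z
    goal.pose.orientation.z = math.sin(waypoint.yaw / 2.0)
    goal.pose.orientation.w = math.cos(waypoint.yaw / 2.0)
    return goal

# In the search loop above, replace nav.go_to(waypoint) with:
# navigator.goToPose(waypoint_to_goal(waypoint))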

Foundations //

Built for dynamic environments, not static maps

Seek is designed around the messy reality of real buildings: things move, people rearrange spaces, and robots arrive after everything has changed.

Training-free by default

No per-site retraining just to make “find the trolley” work. Seek aims to generalise across locations from day one.

Platform-agnostic

Any platform that can localise and stream a camera feed can use Seek — AMRs, mobile bases with arms, quadrupeds, and more.

Observable behaviour

Search runs produce clear traces, so teams can understand how a robot looked for something and why it chose the paths it did.

Proof //

Evidence-driven pilots, not one-off demos

We focus on measurable outcomes in real environments — so stakeholders can approve rollout based on evidence, not hype.

Time-to-find

How quickly a robot can locate the target under normal site conditions.

Success rate

How often the robot confirms the correct target and reaches a safe approach pose.

Intervention reduction

How much manual searching and recovery work is removed from the workflow.

Features //

We handle the search behaviour so you don’t have to

Instead of writing a new script every time you want a robot to “go and find X”, you call Seek and reuse the same behaviour across fleets and sites.

Language-driven goals

Move from internally named waypoints to natural phrases that operations teams actually use.

Consistent search patterns

Get predictable, repeatable search behaviour instead of handcrafted routines that differ per robot.

Safe approach poses

When a target is found, Seek proposes sensible stand-off distances for inspection, handover, or docking.

Replayable traces

Save runs as artefacts you can replay, compare over time, or use as evidence for pilots and audits.
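
For a sense of what that could look like in code, here is a hedged sketch building on the search loop above. The search_trace call and its fields are hypothetical placeholders, not a confirmed Seek endpoint:

import json

# Hypothetical call: `search_trace` and its fields are illustrative,
# not a confirmed part of the Seek API.
trace = seek.search_trace(session.id)

# Persist the run as an artefact you can replay, diff, or attach to a report
with open(f"seek-run-{session.id}.json", "w") as f:
    json.dump(
        {
            "target": trace.target,        # the original phrase
            "waypoints": trace.waypoints,  # where the robot looked, in order
            "outcome": trace.outcome,      # e.g. found / not found / aborted
        },
        f,
        indent=2,
    )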

Use cases //

From “where is it?” to repeatable robot routines

A few ways teams use Seek in practice.

Recovery runs

When inventory and reality disagree, send a robot to “find pallet 18B” or “locate scanner cart 3” and bring back evidence.

Guided navigation

Help visitors and staff with instructions like “walk to the visitor kiosk in atrium B” or “go to the charging bay for unit 12”.

Asset search & inspection

Let robots look for mobile assets – trolleys, carts, spill kits – that don’t stay in fixed bays.

Research & teaching

Run language-driven navigation experiments without building a search behaviour from scratch every semester.

Contact //

Talk to us about your robot and environment

If you have a robot operating in a real building (or a simulator setup you want to validate), we can discuss fit, integration requirements, and what a measurable proof-of-value looks like.

What to include

Robot platform, sensors (RGB-D/LiDAR), your nav stack (ROS 2/Nav2, etc.), and example “find-by-name” queries.

Typical outcomes

A repeatable search behaviour, plus the logs and metrics to support internal approval for rollout.

Deployment options

Hosted API for early evaluations. On-prem/VPC discussions for larger deployments.

FAQ //

Frequently asked questions

Everything you need to know about how Seek fits into your robots and your stack.

What is Seek?

Seek is a semantic navigation layer that lets robots find things in real buildings using natural language, not just coordinates.

Do we need to train it for each site?

No. Seek is designed to work across sites without per-environment training loops. You may tune some parameters for your platform and use case, but not retrain from scratch.
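
As a purely illustrative example of that kind of tuning (the extra keyword arguments below are hypothetical, not confirmed Seek parameters):

# Hypothetical tuning knobs: the extra arguments below are
# illustrative, not confirmed Seek API parameters.
session = seek.search_start(
    target="spill kit in bay 4",
    pose=robot.pose(),
    frame=rgbd.latest_frame(),
    approach_distance_m=0.8,  # wider stand-off for a larger base
    max_search_time_s=300,    # time budget per run
)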

Which robots are compatible?

Any platform that can estimate its pose and stream a camera feed — AMRs, mobile manipulators, quadrupeds, and indoor drones.

Does Seek replace our navigation stack?

No. You keep navigation, control, and safety. Seek provides high-level search behaviour and goal suggestions that plug into your existing stack.

Can Seek run on-prem?

Early deployments run as a hosted API. For larger rollouts, we’re open to discussing on-prem or VPC setups based on security and operational requirements.

How do we get started?

Contact us with your robot platform, environment, and a few example targets you want the robot to find. We’ll confirm integration fit, define success metrics, and propose a short proof-of-value plan.