Show HN: DBOS transact – Ultra-lightweight durable execution in Python

github.com

89 points · jedberg · 126 days ago

Hi HN - DBOS CEO here with the co-founders of DBOS, Peter (KraftyOne) and Qian (qianli_cs). The company started as a research project at Stanford and MIT, where Peter and Qian were advised by Mike Stonebraker, the creator of Postgres, and Matei Zaharia, the creator of Spark. They believe so strongly in reliable, serverless compute that they started a company (with Mike) to bring it to the world!

Today we want to share our brand new Python library providing ultra-lightweight durable execution.

https://github.com/dbos-inc/dbos-transact-py

Durable execution means your program is resilient to any failure. If it is ever interrupted or crashes, all your workflows will automatically resume from the last completed step. If you want to see durable execution in action, check out this demo app:

https://demo-widget-store.cloud.dbos.dev/

Or if you’re like me and want to skip straight to the Python decorators in action, here’s the demo app’s backend – an online store with reliability and correctness in just 200 LOC:

https://github.com/dbos-inc/dbos-demo-apps/blob/main/python/...

Or if you don't want to keep reading and just want to try it out:

https://console.dbos.dev/launch

No matter how many times you try to crash it, it always resumes from exactly where it left off! And yes, that button really does crash the app.

Under the hood, this works by storing your program's execution state (which workflows are currently executing and which steps they've completed) in a Postgres database. So all you need to use it is a Postgres database to connect to—there's no need for a "workflow server." This approach is also incredibly fast, for example 25x faster than AWS Step Functions.
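To make that idea concrete, here's a toy, self-contained sketch of checkpoint-and-resume (illustration only, not DBOS's actual code or API): completed step results are recorded under a workflow ID, so re-running an interrupted workflow replays the steps that already finished instead of executing them again. A dict stands in for the Postgres table.

```python
# Toy sketch of durable execution via checkpointing (illustration only,
# not DBOS's implementation). A dict stands in for the Postgres table
# that records which steps each workflow has completed.

checkpoints: dict[tuple[str, str], object] = {}
charge_attempts = 0  # counts how often the side-effecting step really runs

def durable_step(workflow_id: str, step_name: str, fn):
    """Run fn once per (workflow, step); replay the recorded result after that."""
    key = (workflow_id, step_name)
    if key not in checkpoints:
        checkpoints[key] = fn()  # first execution: run and persist the result
    return checkpoints[key]      # later executions: replay from the checkpoint

def charge_card() -> str:
    global charge_attempts
    charge_attempts += 1         # a real step would call a payment API here
    return "charged"

def checkout_workflow(workflow_id: str) -> dict:
    order = durable_step(workflow_id, "create_order", lambda: {"order_id": 42})
    durable_step(workflow_id, "charge_card", charge_card)
    return order

checkout_workflow("wf-1")  # both steps execute
checkout_workflow("wf-1")  # simulated restart: steps replay, nothing re-runs
```

Because the checkpoints live in Postgres rather than an in-memory dict, the replay works even after a full process crash, which is where the "no workflow server" property comes from.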

Some more cool features include:

* Scheduled jobs—run your workflows exactly once per time interval, with no more need for cron.

* Exactly-once event processing—use workflows to process incoming events (for example, from a Kafka topic) exactly once, with no more need for complex code to avoid repeated processing.

* Observability—all workflows automatically emit OpenTelemetry traces.
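The exactly-once guarantee boils down to durably recording each event's identity before acting on it. Here's a hypothetical, self-contained sketch of that idea (not the DBOS API; a set stands in for durable state, and a real system must do the check-and-record atomically with the work itself, which is what running it inside a Postgres transaction buys you):

```python
# Toy sketch of exactly-once event processing (illustration only).
# Each event carries a unique ID; a redelivered event whose ID was
# already recorded is skipped, so at-least-once delivery (e.g. from
# Kafka) still produces each side effect exactly once.

processed_ids: set[str] = set()
shipped_orders: list[str] = []  # the observable side effect

def handle_event(event_id: str, order: str) -> bool:
    """Process an event exactly once; return True if work was done."""
    if event_id in processed_ids:
        return False              # duplicate delivery: no-op
    processed_ids.add(event_id)
    shipped_orders.append(order)  # the real work happens once
    return True

handle_event("evt-1", "order-1001")  # processed
handle_event("evt-1", "order-1001")  # redelivery: ignored
```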

Docs: https://docs.dbos.dev/

Examples: https://docs.dbos.dev/examples

We also have a webinar on Thursday where we'll walk through the new library. You can sign up here: https://www.dbos.dev/webcast/dbos-transact-python

We'd love to hear what you think! We’ll be in the comments for the rest of the day to answer any questions you may have.


30 comments
jedberg · 126 days ago
Hey all, I'm excited to be the new CEO of DBOS! I'm coming up on my one-month anniversary. I joined because I truly believe DBOS is solving many of the main issues with serverless deployments. I still believe serverless is the way of the future for most applications, and I'm excited to make it a reality.

Ask me anything!

bb01100100 · 126 days ago
Would it be correct to say that these client libraries provide the functionality (e.g. ease of transactions, once-only processing, recovery), whereas your cloud offering solves the scaling/performance issues you'd hit trying to do this with a regular pg-compatible DB?

I do a lot of consulting on Kafka-related architectures and really like the concept of DBOS.

Customers tend to hit a wall of complexity when they want to actually use their streaming data (as distinct from simply piping it into a DWH). Being able to delegate a lot of that complexity to the lower layers is very appealing.

Would DBOS align with / complement these types of Kafka streaming pipelines or are you addressing a different need?


rtcoms · 126 days ago
Recently I came to know about https://www.membrane.io/, which also follows a similar approach, but it looks like it's more for internal apps and small projects.

How would you compare DBOS with that?


ashwindharne · 126 days ago
I've been using Temporal recently for some long-running multi-step AI workflows -- helps me get around API flakiness, manage rate limits for hosted models, and manage load on local models. It's pretty cool to write workers in different languages and run them on different infra and have them all orchestrate together nicely. How does DBOS compare -- what are the core differences?

From what I can tell, the programming model seems to be pretty similar but DBOS doesn't require a centralized workflow server, just serverless functions?


sim7c00 · 126 days ago
It might be interesting to look at a standard for workflows like CACAO to express what a workflow is. That way, workflows can ultimately become shareable between such workflow execution engines and can have common workflow editors. In cyber, it's a big problem that workflows can't be shared between different systems, which adds great cost to implementing such a system (you need to redesign all workflows from the ground up). I think workflows, and easy editors to assemble and connect steps, are a good step ahead in any automation domain, but everywhere people want to reinvent the wheel of expressing what a workflow is.

Definitely a fan of what these types of systems can do in replaying/recovering and retrying steps etc., as well as centralizing a lot of different workloads on a common execution engine.