We’re very excited to release Pyston v2, a faster and highly compatible implementation of the Python programming language. Version 2 is 20% faster than stock Python 3.8 on our macrobenchmarks. More importantly, it is likely to be faster on your code. Pyston v2 can reduce server costs, reduce user latencies, and improve developer productivity.
Pyston v2 is easy to deploy, so if you’re looking for better Python performance, we encourage you to take five minutes and try Pyston. Doing so is one of the easiest ways to speed up your project.
Pyston v2 provides a noticeable speedup on many workloads while having few drawbacks. Our focus has been on web serving workloads, but Pyston v2 is also faster on other workloads and popular benchmarks.
Our team put together a new public Python macrobenchmark suite that measures the performance of several commonly-used Python projects. The benchmarks in this suite are larger than those in other Python suites, making them more likely to be representative of real-world applications. Even though this gives us a lower headline number than other projects report, we believe it translates to better speedups for real use cases. Pyston v2 still performs well on microbenchmarks, running twice as fast as standard Python on tests like chaos.py and nbody.py.
Here are our performance results:
| | CPython 3.8.5 | Pyston 2.0 | PyPy 7.3.2 |
| --- | --- | --- | --- |
| flaskblogging warmup time | n/a | n/a | 85s |
| flaskblogging mean latency | 5.1ms | 4.1ms | 2.5ms |
| flaskblogging p99 latency | 6.3ms | 5.2ms | 5.8ms |
| flaskblogging memory usage | 47MB | 54MB | 228MB |
| djangocms warmup time | n/a | n/a | 105s |
| djangocms mean latency | 14.1ms | 11.8ms | 15.9ms |
| djangocms p99 latency | 15.0ms | 12.8ms | 179ms |
| djangocms memory usage | 84MB | 91MB | 279MB |
| mypy speedup | 1x | 1.07x | unsupported |
| PyTorch speedup | 1x | 1.00x | unsupported |
| PyPy benchmark suite | 1x | 1.36x | 2.48x |
- Warmup time is defined as the time until the benchmark reaches 95% of peak performance; if warmup was not distinguishable from noise, it is marked “n/a”. Only post-warmup behavior is considered for the latency measurements.
- mypy and PyTorch don’t support automatically building their C extensions from source, so these Pyston numbers use our unsafe compatibility mode.
- The PyPy benchmark suite was modified to run only the benchmarks that are compatible with Python 3.8.
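The 95%-of-peak warmup definition can be sketched in a few lines. This is our own illustration of the idea, not the authors' benchmark harness; the function name and the per-second throughput sampling are assumptions.

```python
def warmup_time(throughput_samples, threshold=0.95):
    """Return the index (in seconds, assuming one sample per second) of the
    first sample that reaches `threshold` of peak throughput, or None if the
    benchmark never gets there."""
    peak = max(throughput_samples)
    for t, value in enumerate(throughput_samples):
        if value >= threshold * peak:
            return t
    return None

# Hypothetical JIT-style curve: slow start, then steady state at ~100 req/s.
samples = [10, 40, 70, 90, 96, 99, 100, 100]
print(warmup_time(samples))  # 4 -- first sample at or above 95% of peak
```

An interpreter with no warmup phase would hit the threshold immediately (index 0), which is why CPython and Pyston show “n/a”: their warmup is indistinguishable from noise.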
In our targeted benchmarks (djangocms and flaskblogging), Pyston v2 provides an average 1.22x speedup in mean latency and a 1.18x improvement in p99 latency, while using just a few more megabytes per process. We have not yet invested time in optimizing the other benchmarks.
“p99 latency” is the 99th percentile of the response-time distribution, a common metric in web serving contexts because it captures aspects of user experience that an average hides. PyPy’s high p99 latency on djangocms comes from periodic latency spikes, presumably caused by garbage collection pauses. CPython and Pyston both exhibit periodic spikes, presumably from their cycle collectors, but those spikes are both less frequent and much smaller in magnitude.
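To make the metric concrete, here is a minimal sketch of p99 (using the nearest-rank method) on a hypothetical sample with GC-pause-style spikes; the numbers are illustrative, not the benchmark data.

```python
import math

def p99(latencies):
    """99th-percentile latency via the nearest-rank method:
    the smallest sample value that is >= 99% of all samples."""
    ordered = sorted(latencies)
    rank = math.ceil(0.99 * len(ordered))
    return ordered[rank - 1]

# Hypothetical run: 98 fast responses plus two GC-pause spikes.
latencies = [5.0] * 98 + [179.0, 183.0]
print(sum(latencies) / len(latencies))  # 8.52 -- the mean dilutes the spikes
print(p99(latencies))                   # 179.0 -- p99 exposes them
```

This is why a runtime can look fine on mean latency while a fraction of users see much slower responses.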
The mypy and PyTorch benchmarks show a natural boundary of Pyston v2. These benchmarks both do the bulk of their work in C extensions which are unaffected by our Python speedups. We natively support the C API and do not have an emulation layer, so we are still able to provide a small boost to mypy performance and do not degrade pytorch or numpy performance. Your benefit will depend on your mix of Python and C extension work.
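The effect of that mix can be estimated with a back-of-the-envelope Amdahl's-law calculation. The 90%/3x split below is purely illustrative (not measured data), chosen to show how a large interpreter speedup shrinks when most time is spent in C extensions.

```python
def overall_speedup(c_fraction, python_speedup):
    """Amdahl's law: if `c_fraction` of runtime is C-extension work that is
    not sped up, and the remaining Python work runs `python_speedup`x faster,
    return the overall speedup."""
    return 1.0 / (c_fraction + (1.0 - c_fraction) / python_speedup)

# Hypothetical workload: 90% of time in C extensions, Python portion 3x faster.
print(round(overall_speedup(0.90, 3.0), 2))  # 1.07 -- a small overall boost
```

Conversely, a workload that is mostly pure Python would see nearly the full interpreter speedup.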
We’re planning on going into more detail in future blog posts, but some of the techniques we use in Pyston v2 include:
- A very-low-overhead JIT using DynASM
- General CPython optimizations
- Build process improvements
Since Pyston is a fork of CPython, we believe it is one of the most compatible alternative Python implementations available today. It supports all the same features and C API that CPython does.
While Pyston is functionally identical to CPython in theory, in practice there are some temporary compatibility hurdles, as there are for any new Python implementation. Please see our wiki for details.
Pyston v2.0 is immediately available as a pre-built package. Currently, we have packages for Ubuntu 18.04 and 20.04 x86_64. If you would like support for a different OS, let us know by filing an issue in our issue tracker.
Trying out Pyston is as simple as installing our package, replacing python3 with pyston3, and reinstalling your dependencies with pip-pyston3 install (though see our wiki for a known issue with setuptools). If you already have an automated build set up, the change should be just a few lines.
Our plan is to open-source the code in the future, but since compiler projects are expensive and we no longer have benevolent corporate sponsorship, it is currently closed-source while we iron out our business model.
We are designing Pyston for developers and love to hear about your needs and experiences. So, we’ve set up a Discord server where you can chat with us. If you’d like a commercially-supported version of Pyston, please send us an email.
We’ve optimized Pyston for several use cases but are eager to hear about new ones so that we can make it even more beneficial. If you run into any problems or instances where Pyston does not help as much as expected, please let us know!
We designed Pyston v1 at Dropbox to speed up Python for its web serving workloads. After the project ended, some of us from the team brainstormed how we would do it differently if we were to do it again. In early 2020, enough pieces were in place for us to start a company and work on Pyston full-time.
Pyston v2 is inspired by but is technically unrelated to the original Pyston v1 effort.
We’re on a mission to make Python faster and have plenty of ideas to do so. That means we’re actively looking for people to join the team. Let us know if you’d like to get involved. Otherwise stay tuned for future releases and reach out if you have any questions!
2 thoughts on “Pyston v2: 20% faster Python”
Thank you for the exciting news. A couple of questions:
– how are packages / dependencies managed?
– does it work with conda/miniconda?
It is really cool to read that you guys are back working on this again! Looking forward to future releases and the open sourcing of the code.