The results of this research have been applied to SQLite already and were released in v3.38.0.
tl;dr: Bloom filters were a great fit because they add minimal memory overhead, mesh well with SQLite’s simple implementation, and work within the existing query engine.
Bloom filters can be handy. They’re worth learning even if you don’t care about SQLite.
Neat. Most of that went over my head, but it’s always good to see more performance coming out of existing tech.
Basically, it’s an optimization of a doubly nested for loop: a way to avoid running the inner loop when it’s known there will be no hit.
This is useful when, for example, we want to find all product orders placed by customers in a particular country. We first filter the customers by their country, then match orders against the remaining customers. That matching step is the nested loop.
Something like this:
for order in orders:
    for customer in customers_in_country:
        if order.customer_id == customer.id:
            …
Many orders won’t match a customer in the above query, so we want to single out those orders before running the expensive inner loop. The way they do it is to add a pre-check built from a Bloom filter. I’d recommend looking it up, but in short it’s a probabilistic set-membership structure that’s fast and space-efficient, at the cost of letting through some false positives. Crucially, it never gives false negatives, so no real match is ever skipped. For this use case false positives are fine: the worst that can happen is that the inner loop runs more times than necessary.
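Here’s a toy sketch of a Bloom filter in Python, just to make the idea concrete (my own illustration, not SQLite’s actual implementation; the bit count, number of hashes, and the salted-SHA-256 hashing scheme are all arbitrary choices):

import hashlib

class BloomFilter:
    # k hash functions set k bits per inserted key; a lookup is positive
    # only if all k bits are set. Can return false positives (bits set by
    # other keys), but never false negatives.
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)  # one byte per bit, for simplicity

    def _positions(self, key):
        # Derive k bit positions from salted hashes of the key.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = 1

    def contains(self, key):
        return all(self.bits[pos] for pos in self._positions(key))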
The final code is something like this:
bloom_filter = create_bloom(customer.id for customer in customers_in_country)
for order in orders:
    if bloom_filter.contains(order.customer_id):
        for customer in customers_in_country:
            if order.customer_id == customer.id:
                …

Note the filter has to be built from the customer ids (the same keys we later probe with), not the customer objects themselves.
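For completeness, here’s a runnable version using the toy BloomFilter above (create_bloom, Customer, Order, and the sample data are all made up for illustration):

from collections import namedtuple

Customer = namedtuple("Customer", ["id", "country"])
Order = namedtuple("Order", ["id", "customer_id"])

def create_bloom(keys):
    bf = BloomFilter()  # the toy class from the sketch above
    for key in keys:
        bf.add(key)
    return bf

customers_in_country = [Customer(1, "SE"), Customer(7, "SE")]
orders = [Order(100, 1), Order(101, 3), Order(102, 7)]

bloom_filter = create_bloom(c.id for c in customers_in_country)
for order in orders:
    if bloom_filter.contains(order.customer_id):  # cheap pre-check
        for customer in customers_in_country:     # expensive loop, now usually skipped
            if order.customer_id == customer.id:
                print(f"order {order.id} -> customer {customer.id}")

Order 101 will (almost certainly, barring a false positive) fail the filter check since its customer isn’t in the country, so the inner loop never runs for it.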
That’s certainly not how I would implement any of this in Python.