This is the fascinating story of how researchers cleverly used Bloom filters to make SQLite 10x faster for analytical queries. These are my five-minute notes on the paper “SQLite: Past, Present, and Future”.
Basically, it’s an optimization of a nested for loop: a way to avoid running the inner loop when it’s known up front that there will be no hit.
This is useful when, for example, we want to find all product orders placed by customers in a particular country. One way to do this is to first filter the customers by country, and then match orders against the remaining customers. The matching step is the nested for loop.
Something like this:
for order in orders:
    for customer in customers_in_country:
        if order.customer_id == customer.id:
            ...  # handle the matching (order, customer) pair
Many orders won’t match any customer in the above query, so we want to weed out those orders before running the expensive inner for loop. The way they do it is to build a Bloom filter over the filtered customers. I’d recommend looking it up, but in short it’s a probabilistic set-membership structure that’s fast and space-efficient, at the cost of letting through some false positives. For this use case false positives are fine: the worst that can happen is that the inner for loop runs more times than necessary. Crucially, a Bloom filter never produces false negatives, so no matching order is ever skipped.
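To make that concrete, here is a minimal Bloom filter sketch in Python. The class and its parameters (size, num_hashes) are my own invention for illustration, not the paper’s or SQLite’s actual implementation:

import hashlib

class BloomFilter:
    # A bit array plus a few salted hash functions. contains() can
    # return a false positive, but never a false negative.
    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive num_hashes bit positions by salting a single hash.
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(f"{seed}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def contains(self, item):
        # All bits set means "maybe present"; any unset bit means
        # "definitely not present".
        return all(self.bits[pos] for pos in self._positions(item))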
The final code is something like this:
bloom_filter = create_bloom(customers_in_country)
for order in orders:
    # Cheap probabilistic pre-check: skip orders that definitely
    # have no matching customer.
    if bloom_filter.contains(order.customer_id):
        for customer in customers_in_country:
            if order.customer_id == customer.id:
                ...  # handle the matching (order, customer) pair
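For instance, wiring it up with the BloomFilter sketch from above (the sample data, and the create_bloom helper itself, are hypothetical):

from collections import namedtuple

Customer = namedtuple("Customer", ["id", "country"])
Order = namedtuple("Order", ["id", "customer_id"])

customers_in_country = [Customer(1, "NO"), Customer(7, "NO")]
orders = [Order(100, 1), Order(101, 3), Order(102, 7)]

def create_bloom(customers):
    # Populate a filter with the ids of the pre-filtered customers.
    bloom = BloomFilter()
    for customer in customers:
        bloom.add(customer.id)
    return bloom

bloom_filter = create_bloom(customers_in_country)
for order in orders:
    if bloom_filter.contains(order.customer_id):
        for customer in customers_in_country:
            if order.customer_id == customer.id:
                print(f"order {order.id} matches customer {customer.id}")

Order 101 has no customer in the country, so (barring a false positive) the filter rejects it and the inner loop never runs for it.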
This is just example code to demonstrate the idea of the optimization explained in the article. I based it on the code used in the article, with some major changes to better fit my explanation.