I like what it's trying to do, but I disagree with the approach. I have a few issues with the implementation:
#1 Dexter relies solely on the Postgres cost model.
Dexter works by using HypoPG[0] to create a hypothetical version of every possible index with two or fewer columns. It then runs EXPLAIN against every query and keeps track of which hypothetical indexes the planner picks up. In other words, it finds the indexes that are part of the plans Postgres _thinks_ will be cheapest.
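For reference, the underlying HypoPG workflow looks roughly like this (table and column names are made up for illustration):

```sql
-- Create a hypothetical index: no disk writes, visible only to this session
SELECT * FROM hypopg_create_index('CREATE INDEX ON users (email)');

-- Plain EXPLAIN (without ANALYZE) will consider the hypothetical index
EXPLAIN SELECT * FROM users WHERE email = 'a@example.com';

-- Drop all hypothetical indexes for this session
SELECT hypopg_reset();
```

If the plan switches to an index scan on the hypothetical index, that counts as a win, but only according to the planner's cost estimates.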
The problem is that Postgres is often wrong about which plan is best. If the statistics Postgres has collected don't reflect the actual data, or if the cost model itself misestimates a plan, the recommendations Dexter gets back from HypoPG may not match reality.
I think any good optimization advice has to be based on how the queries actually perform.
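You can see the gap between the two directly: EXPLAIN ANALYZE shows the estimated row counts next to the actual ones, and a large divergence is exactly the situation where cost-based recommendations go wrong. The query and numbers below are purely illustrative:

```sql
EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'pending';
-- Hypothetical output shape:
--   Seq Scan on orders  (cost=0.00..431.00 rows=500 ...)
--                       (actual time=0.02..310.11 rows=2000000 ...)
-- When the planner expects 500 rows but gets 2 million, any index
-- recommendation derived from the estimated cost is essentially a guess.
```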
#2 Dexter will only check a small set of potential optimizations.
Dexter only looks at creating one- and two-column b-tree indexes. That's a tiny slice of the space of possible optimizations. Dexter will not look for things like:
- Config Changes
- GIN Indexes
- Partial Indexes
- Table Partitioning
- Query Rewriting (Dexter wouldn't have caught the issue mentioned in the OP)
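To make a couple of those concrete (hypothetical table names, sketched for illustration): neither of the indexes below would ever appear in Dexter's candidate set, since it only generates plain b-tree indexes over column lists.

```sql
-- GIN index for jsonb containment queries (@>); a b-tree can't serve these
CREATE INDEX events_payload_idx ON events USING gin (payload jsonb_path_ops);

-- Partial index: small, and covers only the rows the hot query touches
CREATE INDEX orders_pending_idx ON orders (created_at) WHERE status = 'pending';
```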
I think Dexter may be an OK starting point if you have never touched an index before, but I don't think it would have caught any of the major optimizations I've implemented.
#3 Dexter is a fully automated solution.
I don't think fully automated systems will be better than humans at this anytime in the next few decades. You can get some basic recommendations from an automated solution, but any good advice is going to have to come from a human.