The idea is not bad but it sounds a bit extreme... Indeed, choosing the right sorting algorithm matters, and which one is right depends on the problem at hand. But 99.99% of the time, a general-purpose, all-round algorithm will do the job within 99% of the best one's performance...
But when it fails, well, it fails big... In my experience, some devs (me included :-)) test with a reduced dataset 'cos they can iterate faster, then go to production without ever trying a real-size dataset. All of a sudden, the O(n²) edge case becomes, well, a problem :-)
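To make that concrete, here's a hypothetical toy (not anyone's actual production code): a naive quicksort that always picks the first element as the pivot. It sails through a 5-element test, then hits its O(n²) worst case the moment the input is already sorted, which production data frequently is:

    # Toy quicksort with a first-element pivot, purely illustrative.
    # Average case O(n log n), but already-sorted input makes every
    # partition maximally unbalanced, so it degrades to O(n^2).
    import sys, time
    sys.setrecursionlimit(10_000)

    def quicksort(xs):
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        smaller = [x for x in rest if x < pivot]
        larger = [x for x in rest if x >= pivot]
        return quicksort(smaller) + [pivot] + quicksort(larger)

    print(quicksort([3, 1, 2, 5, 4]))   # the tiny test set: instant

    sorted_input = list(range(5_000))   # "real-size" and presorted
    t0 = time.perf_counter()
    quicksort(sorted_input)             # recursion depth n, ~n^2/2 compares
    print(f"{time.perf_counter() - t0:.2f}s for just 5k presorted items")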
Reminds me of the time some dev[0] accidentally recreated the traveling salesman problem, wrote a unit test with 5 items in it, said "looks good", and pushed it.
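For the curious, a sketch of how that trap looks (made-up coordinates, this is not the code from [0]): exact TSP by trying every permutation. With 5 stops that's 4! = 24 routes, so the test flies; at 15 stops it's 14! ≈ 8.7e10 routes and the job never comes back.

    # Brute-force TSP, hypothetical sketch: checks all (n-1)! routes.
    from itertools import permutations
    import math

    def shortest_route(points):
        start, rest = points[0], points[1:]
        best = None
        for order in permutations(rest):        # (n-1)! candidate routes
            route = (start,) + order + (start,)
            length = sum(math.dist(a, b) for a, b in zip(route, route[1:]))
            if best is None or length < best[0]:
                best = (length, route)
        return best

    stops = [(0, 0), (1, 2), (3, 1), (2, 4), (5, 0)]   # the 5-item test
    print(shortest_route(stops))                        # "looks good"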
That is, if you ignore linear-time, non-comparison algorithms like radix sort or bucket sort. Those are much harder to adapt to your particular problem, but when properly tuned they can give significantly lower run times.
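For anyone who hasn't played with them, a minimal LSD radix sort sketch (my own toy code, assuming non-negative integer keys): O(n) work per digit pass with zero comparisons, but notice how tightly it's coupled to the key representation, which is exactly the "harder to adapt" part.

    # LSD radix sort for non-negative ints: stable, comparison-free.
    from random import randint

    def radix_sort(xs):
        out = list(xs)
        if not out:
            return out
        biggest, shift = max(out), 0
        while biggest >> shift:                     # one pass per 8-bit digit
            buckets = [[] for _ in range(256)]
            for x in out:
                buckets[(x >> shift) & 0xFF].append(x)
            out = [x for b in buckets for x in b]   # stable gather keeps order
            shift += 8
        return out

    data = [randint(0, 10**6) for _ in range(10)]
    print(radix_sort(data) == sorted(data))         # True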