As reported in my last blog, Stefan was having much greater success with his pgbench results than I was. In reviewing why, we found a problem with the hardware. What I like about this problem is that it makes the results in the previous blog post more interesting. As a reminder, I was running 16 connections over 4 different users at 1M transactions. Below are the results from a single user from that batch:
pghost:  pgport: 6000 nclients: 4 nxacts: 1000000 dbName: bench
transaction type: TPC-B (sort of)
scaling factor: 100
number of clients: 4
number of transactions per client: 1000000
number of transactions actually processed: 4000000/4000000
tps = 101.024360 (including connections establishing)
tps = 101.024392 (excluding connections establishing)

Over 16 connections we were getting ~400 TPS. I verified that this was consistent by running a second test with a single user and 4 connections. The results:
pghost: localhost pgport: 6000 nclients: 4 nxacts: 1000000 dbName: bench
transaction type: TPC-B (sort of)
scaling factor: 100
number of clients: 4
number of transactions per client: 1000000
number of transactions actually processed: 4000000/4000000
tps = 404.021738 (including connections establishing)
tps = 404.022316 (excluding connections establishing)

So, what causes a machine with plenty of resources to perform in such a consistently slow manner? You can only write data as fast as the spindles turn. That is why they invented cache. The results should look very similar to Stefan's once we replace the battery-backed cache.
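For reference, a run like the second test above could be reproduced with an invocation along these lines. This is a sketch based on the parameters shown in the output (port 6000, database bench, scaling factor 100, 4 clients, 1M transactions per client); the exact flags used for the original runs are my assumption:

# Initialize the pgbench database at scaling factor 100 (assumed setup step).
pgbench -i -s 100 -h localhost -p 6000 bench

# Single user, 4 client connections, 1,000,000 transactions per client.
pgbench -h localhost -p 6000 -c 4 -t 1000000 bench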