Why Postgres? (How did I get here?)

You may ask yourself, how did I get here?

The journey to this place in my professional career as the newly hired Director of Business Development for Command Prompt, Inc. is a long and winding one, so I'd like to share a couple of stories to enlighten curious folks:

“Why Postgres?!”

Last September, the fledgling consulting firm where I handled sales and business development was shuttering, and my best bud and business colleague Jim Nasby and I were on the market for new opportunities. Word had gotten out, and I had an introductory call with a potential employer. One advantage of my former position at Blue Treble was the ability to engage regularly in the PostgreSQL community.

I’d been introduced to PostgreSQL long ago by Nasby, who has been at the core of the circle of friends who’ve contributed to my “geek by association” status over the years. Most of them got to know each other as staff and contributors to distributed.net, the Internet’s first general-purpose distributed computing project. Half-listening to highly technical discussions over beers at our favorite local watering hole, I absorbed a wealth of knowledge about distributed computing and database architecture through osmosis. I’d join in heated debates, adding my own insights about how evolutionary theory could be applied to software ecosystems.

I was highly active in our #alg IRC channel, where I was encouraged to get out of Windows. Arguments would ensue over Unix versus Linux, and whether to use Debian, Ubuntu, or Red Hat.

Why Postgres? It’s robust, scalable, and high-performance. And I’m a data geek!

I wasn’t a stranger to databases when I met my distributed.net friends over 13 years ago. I had taken computer math and science courses in high school in the late 70s. My uncle Ronnie, who’d worked at Sperry Univac -- and later ran his own computer consulting company for over 20 years -- encouraged me to take computer courses at the Houston Area League of PC Users, Inc. (HAL-PC), where he was an instructor. I took courses in MS-DOS, Lotus 1-2-3, and dBase III at HAL-PC in the mid-80s.

Fast-forward to December 2001, when I began working at a state regulatory agency as a drinking water quality specialist. I spent hours with a senior team member while she taught me how to build queries and create reports in Microsoft Access 97. For ten years I managed several programs related to the Safe Drinking Water Act and its amendments, with inventory and water quality data residing in various databases. Compliance determination required importing data from the labs into our chemdata database, then running queries and reports that would output non-compliance letters to the regulated community of public water systems.
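At its core, that compliance work boiled down to joining lab results against regulatory limits. Here's a rough sketch of the idea in PostgreSQL-flavored SQL -- the table and column names are hypothetical, invented purely for illustration (our actual system lived in Access):

    -- Hypothetical schema: flag samples that exceed a contaminant's
    -- maximum contaminant level (MCL), the trigger for a non-compliance letter.
    SELECT r.system_id,
           r.contaminant,
           r.result_mg_l,
           l.mcl_mg_l
    FROM   lab_results r
    JOIN   contaminant_limits l ON l.contaminant = r.contaminant
    WHERE  r.result_mg_l > l.mcl_mg_l
    ORDER  BY r.system_id, r.contaminant;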

“But why data? Why are YOU a geek about data?!”

I touched massive amounts of data in that job -- every. single. day. Managing multiple programs wasn’t easy, but not because of the RDBMS’s steep learning curve when I first started -- I absorbed and implemented what I learned from courses that included application development, and "dabbled" with VBA and SQL. The trouble was with the legacy systems -- historical data in a FoxPro table, with a schema that didn’t map to the new data system. How were so many groundwater wells drilled on January 1, 1913?! Because the drill date could not be left null, and that date was chosen as the placeholder. (The significance: 1913 is the year sanitary engineer Vic Ehlers introduced chlorination to Texas waters, eradicating typhoid and cholera and decreasing mortality from waterborne diseases.)
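To illustrate the problem (a hypothetical sketch, not the agency's actual schema): when a date column is declared NOT NULL, an unknown value has to be faked with a sentinel, and that sentinel then haunts every query and report that touches the column. Allowing NULL keeps "unknown" distinct from real data:

    -- Legacy-style table: NOT NULL forces a sentinel for unknown drill dates.
    CREATE TABLE wells_legacy (
        well_id    integer PRIMARY KEY,
        drill_date date NOT NULL DEFAULT '1913-01-01'  -- placeholder, not a real date
    );

    -- A nullable column keeps "unknown" honest.
    CREATE TABLE wells (
        well_id    integer PRIMARY KEY,
        drill_date date  -- NULL means the drill date is unknown
    );

    -- Hunting down suspect sentinel records in the legacy data:
    SELECT well_id
    FROM   wells_legacy
    WHERE  drill_date = DATE '1913-01-01';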

Data silos existed because, at some point, a data project had been spun up and contracted out by another area of the division -- and again, there was no standardization of the schema across organizational units. Worse yet, this fragmented pattern wasn’t only an internal or agency issue -- data had to be migrated quarterly into the EPA’s federal data warehouse, the Safe Drinking Water Information System (SDWIS). I didn’t envy our solo data administrator’s job. It was enough for me to handle the monthly lab data import, growing nervous as we passed 1M records in a Microsoft Access database.

Oh, and I forgot to mention -- all of this data management happened over the network, as the databases were running on a virtual machine.

Dolan Falls on the Devils River, Texas

"But... your background is water and biology??"

The challenges of compliance determination and ensuring QA/QC of sampling and data analysis in my former role as a drinking water program manager taught me to appreciate a robust data platform. The influence of my education in ecology, evolution, and conservation biology should be a no-brainer for any PostgreSQL developer or long-term user. Applying rapid bioassessment methodology to rivers and streams, including the Devils River (seen above), first as a biology undergraduate and later as an environmental field technician for the City of Austin's Watershed Protection Department, taught me the value of cost-effective approaches to identifying water quality problems, ranking sites, and monitoring trends. These experiences fuel my great admiration for the evolution and efficiencies of PostgreSQL, and for the adaptations and new features that contribute to its robustness, performance, and more.

Needless to say, the above-referenced dialogue was a non-starter, but I feel that I've successfully evolved as a PostgreSQL professional and navigated into a harbor where I am firmly anchored. I'm excited about the role I am fulfilling at Command Prompt as Director of Business Development, and I look forward to establishing new relationships and fostering existing ones with both our clients and the PostgreSQL community.