Upgrading Homebrew-installed Postgres 9.3 to 9.5

I didn’t read the instructions when I let PostgreSQL get upgraded from 9.3 to 9.5 with brew upgrade. This is what I had to do to migrate my data after I had already upgraded:

# Switch back to the previous version of postgres, postgis
brew switch postgresql 9.3.5_1
brew switch postgis 2.1.4_1
# Start the server (run this in a separate shell)
postgres -D /usr/local/var/postgres
# Dump all my databases
pg_dumpall > pg_dump
# Stop the server
pg_ctl -D /usr/local/var/postgres stop
# Switch back to the up-to-date version of postgres, postgis
brew switch postgresql 9.5.0
brew switch postgis 2.2.1
# Move the old data directory out of the way
mv /usr/local/var/postgres/ /usr/local/var/postgres9.3
# Initialize the data directory for Postgres 9.5
initdb /usr/local/var/postgres
# Start the server
postgres -D /usr/local/var/postgres
# Import the database dump
psql -d postgres -f pg_dump
# Delete the dump file
rm pg_dump
# Stop the server
pg_ctl -D /usr/local/var/postgres stop
# Start the server with launchctl
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist
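The dump/restore dance above is only needed when the data directory was initialized by an older major version than the installed server. A minimal sketch of that check, where needs_migration is a hypothetical helper (not part of Postgres or Homebrew) that compares the data directory's PG_VERSION contents against the server's version string:

```shell
#!/bin/sh
# Hypothetical helper: decide whether a dump/restore migration is needed
# by comparing the data directory's PG_VERSION file (e.g. "9.3") with the
# major version of the installed server (e.g. "9.5.0" from postgres --version).

needs_migration() {
    datadir_version="$1"   # contents of $PGDATA/PG_VERSION, e.g. "9.3"
    server_version="$2"    # e.g. "9.5.0"
    # In the 9.x series, the "major" version is the first two components.
    server_major=$(printf '%s' "$server_version" | cut -d. -f1,2)
    [ "$datadir_version" != "$server_major" ]
}

if needs_migration "9.3" "9.5.0"; then
    echo "migration needed"       # → migration needed
else
    echo "data directory matches server"
fi
```

In practice you would feed it `cat /usr/local/var/postgres/PG_VERSION` and the installed server's version; if they match, no migration is required and you can skip the dump and restore.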

Floodlight

Floodlight is a web-based platform for telling community stories from people, places and organizations around the Denver metropolitan area.  The project was originally supported by The Piton Foundation and funded in part by a grant received as part of the Knight Foundation Community Information Challenge.

Role

Lead developer

Details

Since many of the stories told on Floodlight are about work being done in specific Denver, Colorado neighborhoods, authors needed to be able to tag their stories with boundary geographies (cities, neighborhoods, ZIP codes) or to provide addresses of specific places, which are automatically geocoded and associated with boundary geographies.

For this project, I relied on the excellent spatial data support of the PostGIS database and the abstractions offered by the Django framework's ORM to implement a data model and create scripts that easily load new boundary geographies from shapefiles and define their relationships with other boundaries.

Users searching for stories in their community can use a faceted browsing interface that filters not only on address and boundary geography but also on other taxonomies.  In order to integrate these different types of filters, I used the Solr search server.  On the front end, the Backbone framework provides an interface to the faceted browsing, and the Leaflet mapping library displays boundary geographies and story markers based on the user's filters.

The application is powered by a RESTful API that is consumed by a Backbone-based story builder.

In addition to the web platform, the project also included a great capacity-building component.  Team members conducted “story raising” events that trained community members in different digital storytelling skills and a “story navigator” worked with community groups to discover and tell stories using the platform.

More Information

Screenshots

Home

Explore Floodlight

Why Stories Matter: A Reflection on the Denver Camping Ban and the Conversation around Homelessness

Draw the Line Midwest

Open source mapping software allowed users to view state and federal legislative districts and draw their own district plans during the 2010 redistricting process.

Role

Integrating developer

Details

I deployed and customized instances of the open source District Builder software for a number of Midwestern states.  Users could view existing districts or create new district maps based on block and tract census geographies.  Some instances provided a platform for competitions to see if users could create more balanced district plans based on political, racial and population demographics.

In order to minimize hosting costs, the instances were deployed on a single server with resources provisioned to virtual hosts using VMware.  In addition to the provisioning and hosting, I handled data preprocessing and cleaning to add additional demographic data for evaluating district plans.

drupal contemplate module with postgres

I’ve been working on setting up a Drupal instance for work and have, for the first time, been using Postgres instead of MySQL for the database. I was installing the Content Template module and got errors when the module tried to create its tables. I think the problem is that the CREATE statements in contemplate.install use MySQL-only data types.  It seems that others have reported this problem, as there is an issue on the Drupal site at http://drupal.org/node/172775 .

I was able to create working tables manually with the following syntax:

CREATE TABLE contemplate (
    type character(32) DEFAULT '' NOT NULL,
    teaser text NOT NULL,
    body text NOT NULL,
    rss text NOT NULL,
    enclosure character(128) NOT NULL,
    flags integer DEFAULT 0 NOT NULL,
    PRIMARY KEY (type)
);

CREATE TABLE contemplate_files (
    site character(255) NOT NULL,
    data bytea NOT NULL,
    UNIQUE (site)
);

I made a patch to contemplate.install that fixed this issue. download
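The gist of such a fix can be sketched as a type substitution over the DDL. This is an illustrative rewrite, not the actual patch: mysql_to_pg is a hypothetical helper, and the sample input is made-up DDL assuming the original schema used MySQL-specific types like LONGTEXT:

```shell
#!/bin/sh
# Rough sketch of the kind of rewrite such a patch performs: map
# MySQL-only column types in a CREATE statement to Postgres equivalents.
# The input DDL below is illustrative, not the exact contemplate.install SQL.

mysql_to_pg() {
    sed -e 's/longtext/text/g' \
        -e 's/mediumtext/text/g' \
        -e 's/longblob/bytea/g' \
        -e 's/ unsigned//g'
}

echo "CREATE TABLE contemplate (body longtext NOT NULL, flags int unsigned);" | mysql_to_pg
# → CREATE TABLE contemplate (body text NOT NULL, flags int);
```

A mapping like this covers the data types; defaults and index syntax may need hand-editing on top of it.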

Once I got past the table creation, the module still returned SQL errors when I tried to update the template for a CCK content type. This seemed to be due to a problem with the SQL syntax. I made a patch to contemplate.module that fixes this in Postgres. I don’t know if it ends up breaking MySQL in the process. download

Oh yeah. I’m using Drupal 5.5 with contemplate 5.x-1.8 and Postgres 8.1.9.