Pandas Dataframes to Fortran Formatting

Ahhhh, Fortran.

The past few weeks I’ve had the pleasure (I think that’s the right word?) of dusting off my olde Fortran skills for several unrelated projects, one of which requires exporting a pandas dataframe from python to a plain text CSV readable by some legacy fortran code. We wanted to speed up and simplify that code, and in the end I figured out a neat way to do it. The solutions out on the web (e.g., this SO post) generally follow the approach of iterating over every row in the dataframe, writing out each row with the appropriate formatting. And while the pandas.DataFrame.to_csv() function has a float_format option, it applies that single format to every float column, and we needed individual control over each column.

So here’s a notebook (available over at github as well) to help out anyone using python and fortran in tandem! It makes use of the lovely fortranformat package so that you can use your beloved fortran format strings exactly as you do in fortran!

[embedded notebook: https://nbviewer.jupyter.org/github/chrishavlin/miscellaneous_python/blob/master/notebooks/PandasFortranFormat.ipynb]
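To give a taste of the general idea, here’s a minimal sketch (my own illustration, not copied from the notebook) that uses fortranformat’s FortranRecordWriter to write each dataframe row with a single fortran format string, giving the per-column control that to_csv’s float_format lacks:

import pandas as pd
from fortranformat import FortranRecordWriter

# a small dataframe standing in for the real data
df = pd.DataFrame({'x': [1.5, 2.25, 3.0], 'n': [10, 20, 30]})

# one fortran format string controls each column individually:
# F10.4 for the float column, I8 for the integer column
writer = FortranRecordWriter('(F10.4, I8)')

with open('for_fortran.csv', 'w') as f:
    for row in df.itertuples(index=False):
        # cast to plain python types to keep fortranformat happy
        f.write(writer.write([float(row.x), int(row.n)]) + '\n')

The resulting file can then be read back by the legacy code with the matching format statement.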

A primer on map projections in Python

Ok, so after my post a little while ago about converting contours to shapefiles, I spent a day adapting the approach for use in a dash-plotly app… and in general it works, and took a ~800 MB file down to a ~14 MB shapefile. But the rendering of the shapefile was still very slooooooow, and due to limitations of how data callbacks update a figure’s data in Dash, I couldn’t just render the background map once. So, after some thinking, I realized that if I “simply” calculated my own map projections, I could use the standard x-y plotting routines (which includes contours, as well as the faster scattergl method for plotting a large number of points).

But then I realized that I only vaguely knew what map projections are: I knew that different projections vary in how they portray the 3D earth on a 2D surface, but when you start using all the mapping libraries out there, you run into a lot of jargon really fast. So here’s a super basic primer on calculating map projections in Python that explains some of that jargon!

Also, this is the first post I decided to write as a Python notebook. The notebook is in my github learning_shapefiles repo, but github has been having some issues displaying notebooks lately, so you can view the full notebook here.

PyProj

To start off, pyproj is a python library built on the PROJ library (https://proj.org/) that provides a large number of algorithms related to map projections. So go install that (simple to do with pip or conda).

In order to use pyproj to project a lat/lon point onto a map projection, it’s helpful to know a few acronyms:

  • CRS: coordinate reference system
  • EPSG: a database of coordinate reference systems and transformations; different projections covering different regional, local, or global domains are specified by the number following EPSG. For example, EPSG:4326 is the global reference system used by modern day GPS (see the short example after this list).
  • WGS84 : World Geodetic System, latest revision, a.k.a. WGS1984, same as EPSG:4326
  • reference ellipse: an ellipse describing the surface of the Earth on which positions (like latitude and longitude) are defined. GPS points use a reference ellipse that approximates the Earth’s geoid (i.e., the gravitational equipotential surface that sea level would follow due to gravity alone, see wiki).
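To make the EPSG jargon a little more concrete, here’s a minimal sketch (my own aside, using the Transformer API that more recent versions of pyproj recommend) converting a lat/lon point from EPSG:4326 into EPSG:3857, the web-mercator system most web maps use:

from pyproj import Transformer

# EPSG:4326 -> EPSG:3857; always_xy=True means arguments are in (lon, lat) order
transformer = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
x, y = transformer.transform(-120.5, 42.4)

The rest of this post sticks with the Proj object, which is all you need for projecting points onto a single projection.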

So now that that’s out of the way… to project a single lat/lon point with pyproj, first import pyproj, then initialize a projection (Proj) object:

from pyproj import Proj
p = Proj(proj='hammer', ellps='WGS84')

When initializing the Proj (projection) object, you give it the reference ellipse that your lat/lon is defined on, ellps='WGS84', and the projection you want to transform to, in this case a hammer projection. Once you’ve initialized Proj, you can use it to move from lat/lon to cartesian x,y:

lon = -120.5
lat = 42.4
x, y = p(lon, lat)

This lon/lat point becomes (x,y) = (-9894366.0792, 5203184.81636). These values are in meters by default and are the cartesian x-y distance from the map center for your point.

The Proj object is flexible enough to take lists (and numpy arrays!), so you can project many points at once. It can also take any of the parameters defined for the PROJ projections as keyword arguments. For example, the Hammer projection has an argument ‘+lon_0’ to set the 0-longitude for the view, and so in pyproj you can use ‘lon_0’ as a keyword argument when using the hammer projection:

p = Proj(proj='hammer', ellps='WGS84', lon_0=-50)
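And since Proj happily accepts numpy arrays, projecting a whole set of points is a single call. A quick sketch (my own illustration, not from the notebook):

import numpy as np
from pyproj import Proj

p = Proj(proj='hammer', ellps='WGS84', lon_0=-50)

# project many points at once: x and y come back as numpy arrays
lons = np.linspace(-120, -100, 5)
lats = np.linspace(35, 45, 5)
x, y = p(lons, lats)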

The rest of the notebook goes through how you could construct a grid of lat/lon lines and plot them for different projections. And at the end, I take a digital elevation model of the western US (which gives an elevation for lat/lon points), project the lat/lon points onto a hammer projection to get points of (elevation, x, y) and contour it using the standard matplotlib.pyplot.contourf (see the notebook for the resulting figure).
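That last step boils down to something like the following sketch (with a synthetic elevation field standing in for the DEM):

import numpy as np
import matplotlib.pyplot as plt
from pyproj import Proj

p = Proj(proj='hammer', ellps='WGS84')

# a lat/lon grid over the western US with made-up "elevations"
lon, lat = np.meshgrid(np.linspace(-125, -100, 200), np.linspace(30, 50, 160))
elev = np.sin(np.radians(4 * lon)) * np.cos(np.radians(6 * lat))

# project the grid, then contour in plain cartesian x-y
x, y = p(lon, lat)
plt.contourf(x, y, elev, 20, cmap='gist_earth')
plt.gca().set_aspect('equal')
plt.show()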

In my particular application, I’ll now be able to project my data to x,y and then use the cartesian contour functions of dash-plotly! The main drawback of this approach is having to write all the routines for handling rotations and plotting lines on maps… but ultimately I think it will work well with Dash, and it will be nice to be able to use scattergl with my map point data.

 

Shapefiles in Python: converting contours to shapes

I’m in the process of finishing up a web-app that I’ve written using Dash/Plotly, and as someone who is not an expert in front end web development, I’ve really enjoyed how easy it is to build an interactive app without getting bogged down by all the javascript/html/css (though knowing some definitely helps!). But recently I’ve run into an issue while figuring out how to plot various datasets as a background contour on a number of interactive maps: while Dash/Plotly is great at plotting interactive points on a map, there’s not a simple way to contour a dataset and use it as the base layer (there is the new scattermapbox class, but there is not yet a simple contouring method for maps like, for example, contourf in the basemap library).

But Dash/Plotly is very good at choropleth (colored shapes) plots, so I figured that if I could convert a matplotlib.pyplot.contourf plot into a shapefile of polygons, I could then load that shapefile in Dash/Plotly as needed and plot it as a choropleth plot, which seems easier to me than rendering a bitmap image of the contour plot in the background (the only other option I can think of)… and so I spent the morning experimenting with how to do this. There are a few scattered stackoverflow questions related to this, but I had to pull together a bunch of pieces to get this working, so I figured a short writeup here might be useful to others out there.

So with that said, here are the python libraries that you’ll need to run my test script:

  • shapely (for polygons, used previously here, here and here)
  • fiona (for writing a shapefile)
  • descartes (for making some plots)
  • matplotlib (for the initial contouring of the data)
  • numpy (for making up some initial data and some array manipulation)

The full code is available in my learning_shapefiles repo and the new script is contourf_to_shp.py. And some useful stackoverflow answers that helped significantly: Converting Matplotlib contour objects to Shapely objects, matplotlib – extracting values from contour lines.

Creating data to contour 

To start off, I wanted to work with a small test data set that would include multiple domains at the same contour level, so I created a lat/lon grid and then superimposed two 2d gaussian curves with different amplitudes and decay rates and plotted those up:

from shapely import geometry
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
import fiona
import os,json
from descartes.patch import PolygonPatch

# create some test data with multiple peaks
lon = np.linspace(0,45,100)
lat = np.linspace(-20,32,90)
long,latg=np.meshgrid(lon,lat)
C1=np.sqrt((long-5.)**2+(latg-25)**2)/30.
C2=np.sqrt((long-30.)**2+(latg-1)**2)/10.
m = 30*np.exp(-C1**2)+20.*np.exp(-C2**2)

# make the contourf plot, storing the resulting ContourSet in cs
plt.figure(figsize=[10,5])
plt.subplot(1,2,1)
Nlevels=10
cs = plt.contourf(lon,lat,m,Nlevels,cmap='gist_heat')
plt.title('contourf figure with Nlevels='+str(Nlevels))

In my actual use-case I’ll be loading a 2D dataset, so most of this will be replaced. But the important bit is that I store the object that results from the contourf call: that cs object contains all the data on the contours.

Storing the contour results

The first step in saving the contoured data is to create a lookup table for the contour levels. The cs object stores each contour in cs.collections and the levels in cs.levels:

# create lookup table for levels
lvl_lookup = dict(zip(cs.collections, cs.levels))

Next, I loop over each contour of cs.collections, convert the contour’s shape into a shapely polygon and store it in a list of dictionaries that also saves that contour’s level value:

# loop over collections (and polygons in each collection), store in list for fiona
PolyList=[]
for col in cs.collections:
    z=lvl_lookup[col] # the value of this level
    for contour_path in col.get_paths():
        # create the polygon for this level
        for ncp,cp in enumerate(contour_path.to_polygons()):
            lons = cp[:,0]
            lats = cp[:,1]
            new_shape = geometry.Polygon([(i[0], i[1]) for i in zip(lons,lats)])
            if ncp == 0:                
                poly = new_shape # first shape
            else:
                poly = poly.difference(new_shape) # Remove the holes
            
            PolyList.append({'poly':poly,'props':{'z': z}})

The list PolyList now holds a shapely polygon as well as the z-value (or level) of that polygon.

Writing the data

In order to use these polygons elsewhere, the easiest thing to do is simply save them within a shapefile. I used the Fiona library for writing, which entails writing the geometry (the polygons) and the properties for each polygon that I stored earlier:

# define ESRI schema, write each polygon to the file
outfi = os.path.join(outname, 'shaped_contour.shp') # outname is the output directory, defined earlier in the full script
schema = {'geometry': 'Polygon','properties': {'z': 'float'}}
with fiona.collection(outfi, "w", "ESRI Shapefile", schema) as output:
    for p in PolyList:
        output.write({'properties': p['props'],
            'geometry': geometry.mapping(p['poly'])})

In order to reconstruct the contour plot, I need to save the information about the different levels (to get the colormap correct). And while I could load the shapefile and loop over the shapes to calculate min/max values, it saves time to simply write that data to a different file. And rather than write it as metadata of some kind for the shapefile, I just drop it into a json file:

# save the levels and global min/max as a separate json for convenience
Lvls={'levels':cs.levels.tolist(),'min':m.min(),'max':m.max()}
with open(os.path.join(outname,'levels.json'), 'w') as fp:
    json.dump(Lvls, fp)

Plotting the polygons

From there, the shapefile and levels data can be re-loaded and plotted to get the following:

[figure: polygon_conts_N10]

On the left is the original contour plot; on the right is the plot of Polygon patches colored using the same colormap. And it’s an almost perfect replica… I’m not sure if it’s my eyes or screen, but the lightest colors seem just a hair different between the two. But that difference is even less noticeable as I use more contours, e.g. 1000:

[figure: polygon_conts_N1000]

The final bit of code is mostly self-explanatory: looping over the shapes in the shapefile, then plotting colored polygon patches like in previous posts. The only tricky business is making sure the polygon facecolor matches the filled contour color. To do this, I load in the contour level json file that I saved off, and load the matplotlib colormap that I used in the initial call to contourf:

# read in levels, define colormap
with open(os.path.join(outname,'levels.json')) as jfile:
    Lvls=json.load(jfile)
levels=np.array(Lvls['levels'])
cmap=plt.cm.gist_heat
lv_range=[Lvls['min'],Lvls['max']]

And then for each polygon, I pull out the z-value that I saved and find the appropriate RGB value for that level with:

        lv=shp['properties']['z'] # this shape's level
        clr=cmap((lv - lv_range[0])/(lv_range[1]-lv_range[0]))

The bit of math with lv_range is to ensure that the value passed to the colormap is between 0 and 1. The clr variable is a tuple that can be given to the descartes PolygonPatch:

patch = PolygonPatch(poly, facecolor=clr, edgecolor=clr)
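Pieced together, the plotting loop looks roughly like this condensed sketch (my own condensation, not verbatim from the repo; it assumes the outname, cmap, and lv_range variables defined above, and the full version lives in contourf_to_shp.py):

import os
import fiona
import matplotlib.pyplot as plt
from shapely.geometry import shape
from descartes.patch import PolygonPatch

fig, ax = plt.subplots()
with fiona.open(os.path.join(outname, 'shaped_contour.shp')) as shapes:
    for shp in shapes:
        poly = shape(shp['geometry'])  # rebuild the shapely polygon
        lv = shp['properties']['z']    # this shape's level
        clr = cmap((lv - lv_range[0]) / (lv_range[1] - lv_range[0]))
        ax.add_patch(PolygonPatch(poly, facecolor=clr, edgecolor=clr))
ax.autoscale_view()
plt.show()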

And that’s pretty much it. I still have some experimenting to do with how many contour levels I need for the datasets I’m working with, and I expect to run into some scaling issues (the full dataset fields I’m working with are 100s of MB), but at least I have a feasible approach!

Full code here (contourf_to_shp.py in the learning_shapefiles repo).