diff --git a/README.md b/README.md index 60492ed..8a8684b 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,5 @@ # SWMMIO -v0.3.5 +*v0.3.5.dev1* [![Build status](https://ci.appveyor.com/api/projects/status/qywujm5w2wm0y2tv/branch/master?svg=true)](https://ci.appveyor.com/project/aerispaha/swmmio/branch/master) [![Build Status](https://travis-ci.com/aerispaha/swmmio.svg?branch=master)](https://travis-ci.com/aerispaha/swmmio) diff --git a/README.rst b/README.rst deleted file mode 100644 index 1ee2d54..0000000 --- a/README.rst +++ /dev/null @@ -1,209 +0,0 @@ -swmmio Documentation -==================== - -|Build status| - -SWMMIO is a set of python tools aiming to provide a means -for version control and visualizing results from the EPA Stormwater -Management Model (SWMM). Command line tools are also provided for -running models individually and in parallel via Python’s -``multiprocessing`` module. These tools are being developed specifically -for the application of flood risk management, though most functionality -is applicable to SWMM modeling in general. - -|Kool Picture| - -Prerequisites -~~~~~~~~~~~~~ - -SWMMIO functions primarily by interfacing with .inp and .rpt (input and -report) files produced by SWMM. Functions within the ``run_models`` -module rely on a SWMM5 engine which can be downloaded `here`_. - -Dependencies -~~~~~~~~~~~~ - -- `pillow`_ -- `matplotlib`_ -- `numpy`_ -- `pandas`_ -- `pyshp`_ - -Installation: -~~~~~~~~~~~~~ - -Before installation, it’s recommended to first activate a `virtualenv`_ -to not crowd your system’s package library. If you don’t use any of the -dependencies listed above, this step is less important. SWMMIO can be -installed via pip in your command line: - -.. code:: bash - - pip install swmmio - -Basic Usage -~~~~~~~~~~~ - -The ``swmmio.Model()`` class provides the basic endpoint for interfacing -with SWMM models. To get started, save a SWMM5 model (.inp) in a -directory with its report file (.rpt). A few examples: - -.. 
code:: python - - import swmmio - - #instantiate a swmmio model object - mymodel = swmmio.Model('/path/to/directory with swmm files') - - #Pandas dataframe with most useful data related to model nodes, conduits, and subcatchments - nodes = mymodel.nodes() - conduits = mymodel.conduits() - subs = mymodel.subcatchments() - - #enjoy all the Pandas functions - nodes.head() - - -+--------------------------+------------+-----------+----------------+------------+----------+----------+--------------+--------+--------------+-------------+--------------+-------+--------------+-------------+---------------+------------------+-----------+------------+---------------------------+ -| Name | InvertElev | MaxDepth | SurchargeDepth | PondedArea | Type | AvgDepth | MaxNodeDepth | MaxHGL | MaxDay_depth | MaxHr_depth | HoursFlooded | MaxQ | MaxDay_flood | MaxHr_flood | TotalFloodVol | MaximumPondDepth | X | Y | coords | -+--------------------------+------------+-----------+----------------+------------+----------+----------+--------------+--------+--------------+-------------+--------------+-------+--------------+-------------+---------------+------------------+-----------+------------+---------------------------+ -| S42A_10.N_4 | 13.506673 | 6.326977 | 5.0 | 110.0 | JUNCTION | 0.69 | 6.33 | 19.83 | 0 | 12:01 | 0.01 | 0.20 | 0.0 | 11:52 | 0.000 | 6.33 | 2689107.0 | 227816.000 | [(2689107.0, 227816.0)] | -+--------------------------+------------+-----------+----------------+------------+----------+----------+--------------+--------+--------------+-------------+--------------+-------+--------------+-------------+---------------+------------------+-----------+------------+---------------------------+ -| D70_ShunkStreet_Trunk_43 | 8.508413 | 2.493647 | 5.0 | 744.0 | JUNCTION | 0.04 | 0.23 | 8.74 | 0 | 12:14 | NaN | NaN | NaN | NaN | NaN | NaN | 2691329.5 | 223675.813 | [(2691329.5, 223675.813)] | -+--------------------------+------------+-----------+----------------+------------+----------+----------+--------------+--------+--------------+-------------+--------------+-------+--------------+-------------+---------------+------------------+-----------+------------+---------------------------+ -| TD61_1_2_90 | 5.150000 | 15.398008 | 0.0 | 0.0 | JUNCTION | 0.68 | 15.40 | 20.55 | 0 | 11:55 | 0.01 | 19.17 | 0.0 | 11:56 | 0.000 | 15.40 | 2698463.5 | 230905.720 | [(2698463.5, 230905.72)] | -+--------------------------+------------+-----------+----------------+------------+----------+----------+--------------+--------+--------------+-------------+--------------+-------+--------------+-------------+---------------+------------------+-----------+------------+---------------------------+ -| D66_36.D.7.C.1_19 | 19.320000 | 3.335760 | 5.0 | 6028.0 | JUNCTION | 0.57 | 3.38 | 22.70 | 0 | 12:00 | 0.49 | 6.45 | 0.0 | 11:51 | 0.008 | 3.38 | 2691999.0 | 230309.563 | [(2691999.0, 230309.563)] | -+--------------------------+------------+-----------+----------------+------------+----------+----------+--------------+--------+--------------+-------------+--------------+-------+--------------+-------------+---------------+------------------+-----------+------------+---------------------------+ - -.. code:: python - - #write to a csv - nodes.to_csv('/path/mynodes.csv') - - #calculate average and weighted average impervious - avg_imperviousness = subs.PercImperv.mean() - weighted_avg_imp = (subs.Area * subs.PercImperv).sum() / len(subs) - -Generating Graphics -~~~~~~~~~~~~~~~~~~~ - -Create an image (.png) visualization of the model. 
By default, pipe -stress and node flood duration is visualized if your model includes -output data (a .rpt file should accompany the .inp). - -.. code:: python - - from swmmio.graphics import swmm_graphics as sg - sg.draw_model(mymodel) - -.. figure:: https://raw.githubusercontent.com/aerispaha/swmmio/master/docs/img/default_draw.png - :alt: Sewer Stress, Node Flooding - - Default Draw Output - -Use pandas to calculate some interesting stats, and generate a image to -highlight what’s interesting or important for your project: - -.. code:: python - - #isolate nodes that have flooded for more than 30 minutes - flooded_series = nodes.loc[nodes.HoursFlooded>0.5, 'TotalFloodVol'] - flood_vol = sum(flooded_series) #total flood volume (million gallons) - flooded_count = len(flooded_series) #count of flooded nodes - - #highlight these nodes in a graphic - nodes['draw_color'] = '#787882' #grey, default node color - nodes.loc[nodes.HoursFlooded>0.5, 'draw_color'] = '#751167' #purple, flooded nodes - - #set the radius of flooded nodes as a function of HoursFlooded - nodes.loc[nodes.HoursFlooded>1, 'draw_size'] = nodes.loc[nodes.HoursFlooded>1, 'HoursFlooded'] * 12 - - #make the conduits grey, sized as function of their geometry - conds['draw_color'] = '#787882' - conds['draw_size'] = conds.Geom1 - - #add an informative annotation, and draw: - annotation = 'Flooded Volume: {}MG\nFlooded Nodes:{}'.format(round(flood_vol), flooded_count) - sg.draw_model(mymodel, annotation=annotation, file_path='flooded_anno_example.png') - -.. figure:: https://raw.githubusercontent.com/aerispaha/swmmio/master/docs/img/flooded_anno_example.png - :alt: Node Flooding with annotation - - Flooded highlight - -Building Variations of Models -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Starting with a base SWMM model, other models can be created by -inserting altered data into a new inp file. Useful for sensitivity -analysis or varying boundary conditions, models can be created using a -fairly simple loop, leveraging the ``modify_model`` package. - -For example, climate change impacts can be investigated by creating a -set of models with varying outfall Fixed Stage elevations: - -.. code:: python - - import os, shutil - import swmmio - from swmmio.utils.modify_model import replace_inp_section - from swmmio.utils.dataframes import create_dataframeINP - - #initialize a baseline model object - baseline = swmmio.Model(r'path\to\baseline.inp') - rise = 0.0 #set the starting sea level rise condition - - #create models up to 5ft of sea level rise. 
- while rise <= 5: - - #create a dataframe of the model's outfalls - outfalls = create_dataframeINP(baseline.inp.path, '[OUTFALLS]') - - #create the Pandas logic to access the StageOrTimeseries column of FIXED outfalls - slice_condition = outfalls.OutfallType == 'FIXED', 'StageOrTimeseries' - - #add the current rise to the outfalls' stage elevation - outfalls.loc[slice_condition] = pd.to_numeric(outfalls.loc[slice_condition]) + rise - - #copy the base model into a new directory - newdir = os.path.join(baseline.inp.dir, str(rise)) - os.mkdir(newdir) - newfilepath = os.path.join(newdir, baseline.inp.name + "_" + str(rise) + '_SLR.inp') - shutil.copyfile(baseline.inp.path, newfilepath) - - #Overwrite the OUTFALLS section of the new model with the adjusted data - replace_inp_section(newfilepath, '[OUTFALLS]', outfalls) - - #increase sea level rise for the next loop - rise += 0.25 - -Access Model Network -~~~~~~~~~~~~~~~~~~~~ - -The ``swmmio.Model`` class returns a Networkx MultiDiGraph -representation of the model via that ``network`` parameter: - -.. code:: python - - - #access the model as a Networkx MutliDiGraph - G = model.network - - #iterate through links - for u, v, key, data in model.network.edges(data=True, keys=True): - - print (key, data['Geom1']) - # do stuff with the network - -.. _here: https://www.epa.gov/water-research/storm-water-management-model-swmm -.. _pillow: http://python-pillow.org/ -.. _matplotlib: http://matplotlib.org/ -.. _numpy: http://www.numpy.org/ -.. _pandas: https://github.com/pydata/pandas -.. _pyshp: https://github.com/GeospatialPython/pyshp -.. _virtualenv: https://github.com/pypa/virtualenv - -.. |Build status| image:: https://ci.appveyor.com/api/projects/status/qywujm5w2wm0y2tv?svg=true - :target: https://ci.appveyor.com/project/aerispaha/swmmio -.. |Kool Picture| image:: https://raw.githubusercontent.com/aerispaha/swmmio/master/docs/img/impact_of_option.png diff --git a/swmmio/__init__.py b/swmmio/__init__.py index 16eca70..5ed5cc9 100644 --- a/swmmio/__init__.py +++ b/swmmio/__init__.py @@ -5,7 +5,7 @@ '''Python SWMM Input/Output Tools''' -VERSION_INFO = (0, 3, 5) +VERSION_INFO = (0, 3, 5, 'dev1') __version__ = '.'.join(map(str, VERSION_INFO)) __author__ = 'Adam Erispaha' __copyright__ = 'Copyright (c) 2016' diff --git a/swmmio/damage/__init__.py b/swmmio/damage/__init__.py index 361f8b4..716be70 100644 --- a/swmmio/damage/__init__.py +++ b/swmmio/damage/__init__.py @@ -1,4 +1,4 @@ -from swmmio.graphics.constants import * +from swmmio.defs.constants import * FLOOD_IMPACT_CATEGORIES = { 'increased_flooding':{ diff --git a/swmmio/defs/config.py b/swmmio/defs/config.py index 2e5870d..637a8d8 100644 --- a/swmmio/defs/config.py +++ b/swmmio/defs/config.py @@ -1,26 +1,27 @@ import os # This is the swmmio project root -ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) +ROOT_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) -#path to the SWMM5 executable used within the run_models module +# path to the SWMM5 executable used within the run_models module if os.name == 'posix': SWMM_ENGINE_PATH = os.path.join(ROOT_DIR, 'lib', 'linux', 'swmm5') else: SWMM_ENGINE_PATH = os.path.join(ROOT_DIR, 'lib', 'windows', 'swmm5_22.exe') -#path to the default geodatabase. 
used for some arcpy functions and parcel calcs -GEODATABASE = r'C:\Data\ArcGIS\GDBs\LocalData.gdb' - -#feature class name of parcels in geodatabase +# feature class name of parcels in geodatabase PARCEL_FEATURES = r'PWD_PARCELS_SHEDS_PPORT' -#name of the directories in which to store post processing data +# name of the directories in which to store post processing data REPORT_DIR_NAME = r'Report' -#path to the basemap file used to create custom basemaps -BASEMAP_PATH = os.path.join(ROOT_DIR,'swmmio','reporting','basemaps','index.html') -BETTER_BASEMAP_PATH = os.path.join(ROOT_DIR,'swmmio','reporting','basemaps','mapbox_base.html') +# path to the default font file used for annotations in model graphics +FONT_PATH = os.path.join(ROOT_DIR, 'swmmio', 'graphics', 'fonts', 'Verdana.ttf') + +# path to the default geodatabase. used for some arcpy functions and parcel calcs +GEODATABASE = r'C:\Data\ArcGIS\GDBs\LocalData.gdb' + +# path to the basemap file used to create custom basemaps +BASEMAP_PATH = os.path.join(ROOT_DIR, 'swmmio', 'reporting', 'basemaps', 'index.html') +BETTER_BASEMAP_PATH = os.path.join(ROOT_DIR, 'swmmio', 'reporting', 'basemaps', 'mapbox_base.html') -#path to the basemap file used to create custom basemaps -FONT_PATH = os.path.join(ROOT_DIR,'swmmio','graphics','fonts','Verdana.ttf') diff --git a/swmmio/graphics/constants.py b/swmmio/defs/constants.py similarity index 100% rename from swmmio/graphics/constants.py rename to swmmio/defs/constants.py diff --git a/swmmio/graphics/__init__.py b/swmmio/graphics/__init__.py index b043d01..331013b 100644 --- a/swmmio/graphics/__init__.py +++ b/swmmio/graphics/__init__.py @@ -1,4 +1,6 @@ from swmmio.defs.config import FONT_PATH +from swmmio.defs.constants import * + class _dotdict(dict): """dot.notation access to dictionary attributes""" @@ -6,6 +8,7 @@ class _dotdict(dict): __setattr__ = dict.__setitem__ __delattr__ = dict.__delitem__ + """ this allows module wide configuration of drawing methods. The dotdict allows for convenient access.
@@ -15,24 +18,46 @@ class _dotdict(dict): from swmmio.graphics import swmm_graphics as sg #configure - graphics.options.inlcude_parcels = True + graphics.config.inlcude_parcels = True #draws the model with parcels sg.drawModel(swmmio.Model(/path/to/model), bbox=su.d68d70) """ _o = { - 'include_basemap':False, - 'include_parcels':False, - 'basemap_shapefile_dir':r'C:\Data\ArcGIS\Shapefiles', - - #regular shapefile used for drawing parcels - 'parcels_shapefile':r'C:\Data\ArcGIS\Shapefiles\pennsport_parcels.shp', - - #table resulting from one-to-many spatial join of parcels to sheds - 'parcel_node_join_data':r'P:\02_Projects\SouthPhila\SE_SFR\MasterModels\CommonData\pennsport_sheds_parcels_join.csv', - - 'font_file':FONT_PATH,#r'C:\Data\Code\Fonts\Raleway-Regular.ttf', + 'include_basemap': False, + 'include_parcels': False, + 'basemap_shapefile_dir': r'C:\Data\ArcGIS\Shapefiles', + + # regular shapefile used for drawing parcels + 'parcels_shapefile': r'C:\Data\ArcGIS\Shapefiles\pennsport_parcels.shp', + + # table resulting from one-to-many spatial join of parcels to sheds + 'parcel_node_join_data': r'P:\02_Projects\SouthPhila\SE_SFR\MasterModels\CommonData\pennsport_sheds_parcels_join.csv', + + 'font_file': FONT_PATH, + 'basemap_options': { + 'gdb': r'C:\Data\ArcGIS\GDBs\LocalData.gdb', + 'features': [ + # this is an array so we can control the order of basemap layers + { + 'feature': 'PhiladelphiaParks', + 'fill': park_green, + 'cols': ["OBJECTID"] # , "SHAPE@"] + }, + { + 'feature': 'HydroPolyTrim', + 'fill': water_grey, + 'cols': ["OBJECTID"] # , "SHAPE@"] + }, + { + 'feature': 'Streets_Dissolved5_SPhilly', + 'fill': lightgrey, + 'fill_anno': grey, + 'cols': ["OBJECTID", "ST_NAME"] # "SHAPE@", + } + ], +} } config = _dotdict(_o) diff --git a/swmmio/graphics/drawing.py b/swmmio/graphics/drawing.py index b014a71..69b9ec0 100644 --- a/swmmio/graphics/drawing.py +++ b/swmmio/graphics/drawing.py @@ -1,6 +1,4 @@ -import math -from swmmio.graphics.constants import * #constants -from swmmio.graphics import config, options +from swmmio.defs.constants import * from swmmio.graphics.utils import * from PIL import Image, ImageDraw, ImageFont, ImageOps from time import strftime @@ -9,275 +7,227 @@ # FUNCTIONS FOR COMPUTING THE VISUAL CHARACTERISTICS OF MODEL ELEMENTS def node_draw_size(node): - """given a row of a nodes() dataframe, return the size it should be drawn""" + """given a row of a nodes() dataframe, return the size it should be drawn""" - if 'draw_size' in node.axes[0]: - #if this value has already been calculated - return node.draw_size + if 'draw_size' in node.axes[0]: + # if this value has already been calculated + return node.draw_size + + radius = 0 # aka don't show this node by default + if 'HoursFlooded' in node and node.HoursFlooded >= 0.083: + radius = node.HoursFlooded * 3 + return radius - radius = 0 #aka don't show this node by default - if 'HoursFlooded' in node and node.HoursFlooded >= 0.083: - radius = node.HoursFlooded*3 - return radius def node_draw_color(node): - """given a row of a nodes() dataframe, return the color it should be drawn""" + """given a row of a nodes() dataframe, return the color it should be drawn""" + + if 'draw_color' in node.axes[0]: + # if this value has already been calculated + return node.draw_color - if 'draw_color' in node.axes[0]: - #if this value has already been calculated - return node.draw_color + color = '#d2d2e6' # (210, 210, 230) #default color + if 'HoursFlooded' in node and node.HoursFlooded >= 0.083: + color = red + return color - color 
= '#d2d2e6'#(210, 210, 230) #default color - if 'HoursFlooded' in node and node.HoursFlooded >= 0.083: - color = red - return color def conduit_draw_size(conduit): - """return the draw size of a conduit""" + """return the draw size of a conduit""" - if 'draw_size' in conduit.axes[0]: - #if this value has already been calculated - return conduit.draw_size + if 'draw_size' in conduit.axes[0]: + # if this value has already been calculated + return conduit.draw_size - draw_size = 1 - if 'MaxQPerc' in conduit and conduit.MaxQPerc >= 1: - capacity = conduit.MaxQ / conduit.MaxQPerc - stress = conduit.MaxQ / capacity - fill = gradient_grey_red(conduit.MaxQ*100, 0, capacity*300) - draw_size = int(round(math.pow(stress*10, 0.8))) + draw_size = 1 + if 'MaxQPerc' in conduit and conduit.MaxQPerc >= 1: + capacity = conduit.MaxQ / conduit.MaxQPerc + stress = conduit.MaxQ / capacity + fill = gradient_grey_red(conduit.MaxQ * 100, 0, capacity * 300) + draw_size = int(round(math.pow(stress * 10, 0.8))) - elif 'Geom1' in conduit: - draw_size = conduit.Geom1 + elif 'Geom1' in conduit: + draw_size = conduit.Geom1 + + return draw_size - return draw_size def conduit_draw_color(conduit): - """return the draw color of a conduit""" + """return the draw color of a conduit""" + + if 'draw_color' in conduit.axes[0]: + # if this value has already been calculated + return conduit.draw_color - if 'draw_color' in conduit.axes[0]: - #if this value has already been calculated - return conduit.draw_color + fill = '#787882' # (120, 120, 130) + if 'MaxQPerc' in conduit and conduit.MaxQPerc >= 1: + capacity = conduit.MaxQ / conduit.MaxQPerc + stress = conduit.MaxQ / capacity + fill = gradient_grey_red(conduit.MaxQ * 100, 0, capacity * 300) + return fill - fill = '#787882' #(120, 120, 130) - if 'MaxQPerc' in conduit and conduit.MaxQPerc >= 1: - capacity = conduit.MaxQ / conduit.MaxQPerc - stress = conduit.MaxQ / capacity - fill = gradient_grey_red(conduit.MaxQ*100, 0, capacity*300) - return fill def parcel_draw_color(parcel, style='risk'): - if style == 'risk': - fill = gradient_color_red(parcel.HoursFlooded + 0.5, 0, 3) - if style == 'delta': - fill = lightgrey #default - if parcel.Category == 'increased_flooding': - #parcel previously flooded, now floods more - fill = red + if style == 'risk': + fill = gradient_color_red(parcel.HoursFlooded + 0.5, 0, 3) + if style == 'delta': + fill = lightgrey # default + if parcel.Category == 'increased_flooding': + # parcel previously flooded, now floods more + fill = red - if parcel.Category == 'new_flooding': - #parcel previously did not flood, now floods in proposed conditions - fill = purple + if parcel.Category == 'new_flooding': + # parcel previously did not flood, now floods in proposed conditions + fill = purple - if parcel.Category == 'decreased_flooding': - #parcel flooding problem decreased - fill = lightblue #du.lightgrey + if parcel.Category == 'decreased_flooding': + # parcel flooding problem decreased + fill = lightblue # du.lightgrey - if parcel.Category == 'eliminated_flooding': - #parcel flooding problem eliminated - fill = lightgreen + if parcel.Category == 'eliminated_flooding': + # parcel flooding problem eliminated + fill = lightgreen + + return fill - return fill # PIL DRAW METHODS APPLIED TO ImageDraw OBJECTS def draw_node(node, draw): - """draw a node to the given PIL ImageDraw object""" - color = node_draw_color(node) - radius = node_draw_size(node) - draw.ellipse(circle_bbox(node.draw_coords[0], radius), fill = color) + """draw a node to the given PIL ImageDraw 
object""" + color = node_draw_color(node) + radius = node_draw_size(node) + draw.ellipse(circle_bbox(node.draw_coords[0], radius), fill=color) + def draw_conduit(conduit, draw): - #default fill and size + # default fill and size fill = conduit_draw_color(conduit) draw_size = int(conduit_draw_size(conduit)) xys = conduit.draw_coords - #draw that thing - draw.line(xys, fill = fill, width = draw_size) - if length_bw_coords(xys[0], xys[-1]) > draw_size*0.75: - #if length is long enough, add circles on the ends to smooth em out - #this check avoids circles being drawn for tiny pipe segs - draw.ellipse(circle_bbox(xys[0], draw_size*0.5), fill = fill) - draw.ellipse(circle_bbox(xys[1], draw_size*0.5), fill = fill) + # draw that thing + draw.line(xys, fill=fill, width=draw_size) + if length_bw_coords(xys[0], xys[-1]) > draw_size * 0.75: + # if length is long enough, add circles on the ends to smooth em out + # this check avoids circles being drawn for tiny pipe segs + draw.ellipse(circle_bbox(xys[0], draw_size * 0.5), fill=fill) + draw.ellipse(circle_bbox(xys[1], draw_size * 0.5), fill=fill) + def draw_parcel_risk(parcel, draw): - fill = gradient_color_red(parcel.HoursFlooded + 0.5, 0, 3) - draw.polygon(parcel.draw_coords, fill=fill) + fill = gradient_color_red(parcel.HoursFlooded + 0.5, 0, 3) + draw.polygon(parcel.draw_coords, fill=fill) + def draw_parcel_risk_delta(parcel, draw): - if parcel.Category == 'increased_flooding': - #parcel previously flooded, now floods more - fill = red + if parcel.Category == 'increased_flooding': + # parcel previously flooded, now floods more + fill = red - if parcel.Category == 'new_flooding': - #parcel previously did not flood, now floods in proposed conditions - fill = purple + if parcel.Category == 'new_flooding': + # parcel previously did not flood, now floods in proposed conditions + fill = purple - if parcel.Category == 'decreased_flooding': - #parcel flooding problem decreased - fill = lightblue #du.lightgrey + if parcel.Category == 'decreased_flooding': + # parcel flooding problem decreased + fill = lightblue # du.lightgrey - if parcel.Category == 'eliminated_flooding': - #parcel flooding problem eliminated - fill = lightgreen + if parcel.Category == 'eliminated_flooding': + # parcel flooding problem eliminated + fill = lightgreen + + draw.polygon(parcel.draw_coords, fill=fill) - draw.polygon(parcel.draw_coords, fill=fill) def annotate_streets(df, img, text_col): + # confirm font file location + if not os.path.exists(FONT_PATH): + print('Error loading default font. Check your FONT_PATH') + return None + + unique_sts = df[text_col].unique() + for street in unique_sts: + draw_coords = df.loc[df.ST_NAME == street, 'draw_coords'].tolist()[0] + coords = df.loc[df.ST_NAME == street, 'coords'].tolist()[0] + font = ImageFont.truetype(FONT_PATH, int(25)) + imgTxt = Image.new('L', font.getsize(street)) + drawTxt = ImageDraw.Draw(imgTxt) + drawTxt.text((0, 0), street, font=font, fill=(10, 10, 12)) + angle = angle_bw_points(coords[0], coords[1]) + texrot = imgTxt.rotate(angle, expand=1) + mpt = midpoint(draw_coords[0], draw_coords[1]) + img.paste(ImageOps.colorize(texrot, (0, 0, 0), (10, 10, 12)), mpt, texrot) - #confirm font file location - if not os.path.exists(config.font_file): - print('Error loading defautl font. 
Check your config.font_file') - return None - - unique_sts = df[text_col].unique() - for street in unique_sts: - draw_coords = df.loc[df.ST_NAME==street, 'draw_coords'].tolist()[0] - coords = df.loc[df.ST_NAME==street, 'coords'].tolist()[0] - font = ImageFont.truetype(config.font_file, int(25)) - imgTxt = Image.new('L', font.getsize(street)) - drawTxt = ImageDraw.Draw(imgTxt) - drawTxt.text((0,0), street, font=font, fill=(10,10,12)) - angle = angle_bw_points(coords[0], coords[1]) - texrot = imgTxt.rotate(angle, expand=1) - mpt = midpoint(draw_coords[0], draw_coords[1]) - img.paste(ImageOps.colorize(texrot, (0,0,0), (10,10,12)), mpt, texrot) def gradient_grey_red(x, xmin, xmax): + range = xmax - xmin - range = xmax - xmin - - rMin = 100 - bgMax = 100 - rScale = (255 - rMin) / range - bgScale = (bgMax) / range - x = min(x, xmax) #limit any vals to the prescribed max + rMin = 100 + bgMax = 100 + rScale = (255 - rMin) / range + bgScale = (bgMax) / range + x = min(x, xmax) # limit any vals to the prescribed max + # print "range = " + str(range) + # print "scale = " + str(scale) + r = int(round(x * rScale + rMin)) + g = int(round(bgMax - x * bgScale)) + b = int(round(bgMax - x * bgScale)) - #print "range = " + str(range) - #print "scale = " + str(scale) - r = int(round(x*rScale + rMin )) - g = int(round(bgMax - x*bgScale)) - b = int(round(bgMax - x*bgScale)) + return (r, g, b) - return (r, g, b) def line_size(q, exp=1): - return int(round(math.pow(q, exp))) + return int(round(math.pow(q, exp))) -def gradient_color_red(x, xmin, xmax, startCol=lightgrey): - range = xmax - xmin +def gradient_color_red(x, xmin, xmax, startCol=lightgrey): + range = xmax - xmin - rMin = startCol[0] - gMax = startCol[1] - bMax = startCol[2] + rMin = startCol[0] + gMax = startCol[1] + bMax = startCol[2] - rScale = (255 - rMin) / range - gScale = (gMax) / range - bScale = (bMax) / range - x = min(x, xmax) #limit any vals to the prescribed max + rScale = (255 - rMin) / range + gScale = (gMax) / range + bScale = (bMax) / range + x = min(x, xmax) # limit any vals to the prescribed max + # print "range = " + str(range) + # print "scale = " + str(scale) + r = int(round(x * rScale + rMin)) + g = int(round(gMax - x * gScale)) + b = int(round(bMax - x * bScale)) - #print "range = " + str(range) - #print "scale = " + str(scale) - r = int(round(x*rScale + rMin )) - g = int(round(gMax - x*gScale)) - b = int(round(bMax - x*bScale)) + return (r, g, b) - return (r, g, b) def annotate_title(title, draw): - size = (draw.im.getbbox()[2], draw.im.getbbox()[3]) - scale = 1 * size[0] / 2048 - fnt = ImageFont.truetype(config.font_file, int(40 *scale)) - draw.text((10, 15), title, fill=black, font=fnt) - -def annotate_timestamp(draw): - size = (draw.im.getbbox()[2], draw.im.getbbox()[3]) - scale = 1 * size[0] / 2048 - fnt = ImageFont.truetype(config.font_file, int(20 *scale)) - - timestamp = strftime("%b-%d-%Y %H:%M:%S") - txt_height = draw.textsize(timestamp, fnt)[1] - txt_width = draw.textsize(timestamp, fnt)[0] - xy = (size[0] - txt_width - 10, 15) - draw.text(xy, timestamp, fill=grey, font=fnt) + size = (draw.im.getbbox()[2], draw.im.getbbox()[3]) + scale = 1 * size[0] / 2048 + fnt = ImageFont.truetype(FONT_PATH, int(40 * scale)) + draw.text((10, 15), title, fill=black, font=fnt) -def annotate_details(txt, draw): - size = (draw.im.getbbox()[2], draw.im.getbbox()[3]) - scale = 1 * size[0] / 2048 - fnt = ImageFont.truetype(config.font_file, int(20 *scale)) - - txt_height = draw.textsize(txt, fnt)[1] - - draw.text((10, size[1] - txt_height - 
10), - txt, fill=black, font=fnt) - -#LEGACY CODE !!!!!! -def _annotateMap (canvas, model, model2=None, currentTstr = None, options=None, results={}): - - #unpack the options - nodeSymb = options['nodeSymb'] - conduitSymb = options['conduitSymb'] - basemap = options['basemap'] - parcelSymb = options['parcelSymb'] - traceUpNodes = options['traceUpNodes'] - traceDnNodes = options['traceDnNodes'] - - modelSize = (canvas.im.getbbox()[2], canvas.im.getbbox()[3]) - - #define main fonts - fScale = 1 * modelSize[0] / 2048 - titleFont = ImageFont.truetype(fontFile, int(40 * fScale)) - font = ImageFont.truetype(fontFile, int(20 * fScale)) - - #Buid the title and files list (handle 1 or two input models) - #this is hideous, or elegant? - files = title = results_string = symbology_string = annotationTxt = "" - files = '\n'.join([m.rpt.path for m in [_f for _f in [model, model2] if _f]]) - title = ' to '.join([m.inp.name for m in [_f for _f in [model, model2] if _f]]) - symbology_string = ', '.join([s['title'] for s in [_f for _f in [nodeSymb, conduitSymb, parcelSymb] if _f]]) - title += "\n" + symbology_string - - #collect results - for result, value in results.items(): - results_string += '\n' + result + ": " + str(value) - - #compile the annotation text - if results: - annotationTxt = results_string + "\n" - annotationTxt += files - - - annoHeight = canvas.textsize(annotationTxt, font)[1] - - canvas.text((10, 15), title, fill=black, font=titleFont) - canvas.text((10, modelSize[1] - annoHeight - 10), annotationTxt, fill=black, font=font) +def annotate_timestamp(draw): + size = (draw.im.getbbox()[2], draw.im.getbbox()[3]) + scale = 1 * size[0] / 2048 + fnt = ImageFont.truetype(FONT_PATH, int(20 * scale)) - if currentTstr: - #timestamp in lower right corner - annoHeight = canvas.textsize(currentTstr, font)[1] - annoWidth = canvas.textsize(currentTstr, font)[0] - canvas.text((modelSize[0] - annoWidth - 10, modelSize[1] - annoHeight - 10), currentTstr, fill=black, font=font) + timestamp = strftime("%b-%d-%Y %H:%M:%S") + txt_height = draw.textsize(timestamp, fnt)[1] + txt_width = draw.textsize(timestamp, fnt)[0] + xy = (size[0] - txt_width - 10, 15) + draw.text(xy, timestamp, fill=grey, font=fnt) - #add postprocessing timestamp - timestamp = strftime("%b-%d-%Y %H:%M:%S") - annoHeight = canvas.textsize(timestamp, font)[1] - annoWidth = canvas.textsize(timestamp, font)[0] - canvas.text((modelSize[0] - annoWidth - 10, annoHeight - 5), timestamp, fill=du.grey, font=font) +def annotate_details(txt, draw): + size = (draw.im.getbbox()[2], draw.im.getbbox()[3]) + scale = 1 * size[0] / 2048 + fnt = ImageFont.truetype(FONT_PATH, int(20 * scale)) + txt_height = draw.textsize(txt, fnt)[1] + draw.text((10, size[1] - txt_height - 10), + txt, fill=black, font=fnt) -#end diff --git a/swmmio/graphics/options.py b/swmmio/graphics/options.py deleted file mode 100644 index 4d01c3a..0000000 --- a/swmmio/graphics/options.py +++ /dev/null @@ -1,26 +0,0 @@ -from swmmio.defs.config import PARCEL_FEATURES, GEODATABASE -from constants import * - -font_file = r"C:\Data\Code\Fonts\Raleway-Regular.ttf" -basemap_options = { -'gdb': GEODATABASE, -'features': [ - #this is an array so we can control the order of basemap layers - { - 'feature': 'PhiladelphiaParks', - 'fill': park_green, - 'cols': ["OBJECTID"]#, "SHAPE@"] - }, - { - 'feature': 'HydroPolyTrim', - 'fill':water_grey, - 'cols': ["OBJECTID"]#, "SHAPE@"] - }, - { - 'feature': 'Streets_Dissolved5_SPhilly', - 'fill': lightgrey, - 'fill_anno': grey, - 'cols': ["OBJECTID", "ST_NAME"] 
#"SHAPE@", - } - ], -} diff --git a/swmmio/graphics/swmm_graphics.py b/swmmio/graphics/swmm_graphics.py index e1d8ab5..883073c 100644 --- a/swmmio/graphics/swmm_graphics.py +++ b/swmmio/graphics/swmm_graphics.py @@ -1,119 +1,107 @@ -#graphical functions for SWMM files -from swmmio.defs.config import * -from swmmio.damage import parcels as pdamage -from swmmio.graphics import config, options -from swmmio.graphics.constants import * #constants -from swmmio.graphics.utils import * +# graphical functions for SWMM files +# from swmmio.graphics import config, options +# from swmmio.graphics.constants import * #constants +# from swmmio.graphics.utils import * from swmmio.graphics.drawing import * from swmmio.utils import spatial -import pandas as pd + import os from PIL import Image, ImageDraw - def _draw_basemap(draw, img, bbox, px_width, shift_ratio): + """ + given the shapefiles in config.basemap_options, render each layer + on the model basemap. + """ - """ - given the shapefiles in options.basemap_options, render each layer - on the model basemap. - """ + for f in config.basemap_options['features']: - for f in options.basemap_options['features']: + shp_path = os.path.join(config.basemap_shapefile_dir, f['feature']) + df = spatial.read_shapefile(shp_path)[f['cols'] + ['coords']] + df = px_to_irl_coords(df, bbox=bbox, shift_ratio=shift_ratio, + px_width=px_width)[0] - shp_path = os.path.join(config.basemap_shapefile_dir, f['feature']) - df = spatial.read_shapefile(shp_path)[f['cols']+['coords']] - df = px_to_irl_coords(df, bbox=bbox, shift_ratio=shift_ratio, - px_width=px_width)[0] - - if 'ST_NAME' in df.columns: - #this is a street, draw a polyline accordingly - df.apply(lambda r: draw.line(r.draw_coords, fill=f['fill']), axis=1) - annotate_streets(df, img, 'ST_NAME') - else: - df.apply(lambda r: draw.polygon(r.draw_coords, - fill=f['fill']), axis=1) + if 'ST_NAME' in df.columns: + # this is a street, draw a polyline accordingly + df.apply(lambda r: draw.line(r.draw_coords, fill=f['fill']), axis=1) + annotate_streets(df, img, 'ST_NAME') + else: + df.apply(lambda r: draw.polygon(r.draw_coords, + fill=f['fill']), axis=1) def draw_model(model=None, nodes=None, conduits=None, parcels=None, title=None, - annotation=None, file_path=None, bbox=None, px_width=2048.0): - """ - create a png rendering of the model and model results. - - A swmmio.Model object can be passed in independently, or Pandas Dataframes - for the nodes and conduits of a model may be passed in. A dataframe containing - parcel data can optionally be passed in. + annotation=None, file_path=None, bbox=None, px_width=2048.0): + """create a png rendering of the model and model results. - model -> swmmio.Model object + A swmmio.Model object can be passed in independently, or Pandas Dataframes + for the nodes and conduits of a model may be passed in. A dataframe containing + parcel data can optionally be passed in. - nodes -> Pandas Dataframe (optional, if model not provided) + model -> swmmio.Model object - conduits -> Pandas Dataframe (optional, if model not provided) + nodes -> Pandas Dataframe (optional, if model not provided) - parcels - > Pandas Dataframe (optional) + conduits -> Pandas Dataframe (optional, if model not provided) - title -> string, to be written in top left of PNG + parcels - > Pandas Dataframe (optional) - annotation -> string, to be written in bottom left of PNG + title -> string, to be written in top left of PNG - file_path -> stirng, file path where png should be drawn. 
if not specified, - a PIL Image object is return (nice for IPython notebooks) + annotation -> string, to be written in bottom left of PNG - bbox -> tuple of coordinates representing bottom left and top right corner - of a bounding box. the rendering will be clipped to this box. If not - provided, the rendering will clip tightly to the model extents - e.g. bbox = ((2691647, 221073), (2702592, 227171)) + file_path -> string, file path where png should be drawn. if not specified, + a PIL Image object is returned (nice for IPython notebooks) - Note: this hasn't been tested with anything other than PA StatePlane coords + bbox -> tuple of coordinates representing bottom left and top right corner + of a bounding box. the rendering will be clipped to this box. If not + provided, the rendering will clip tightly to the model extents + e.g. bbox = ((2691647, 221073), (2702592, 227171)) - px_width -> float, width of image in pixels - """ + Note: this hasn't been tested with anything other than PA StatePlane coords - #gather the nodes and conduits data if a swmmio Model object was passed in - if model is not None: - nodes = model.nodes() - conduits = model.conduits() + px_width -> float, width of image in pixels + """ - #antialias X2 - xplier=1 - xplier *= px_width/1024 #scale the symbology sizes - px_width = px_width*2 + # gather the nodes and conduits data if a swmmio Model object was passed in + if model is not None: + nodes = model.nodes() + conduits = model.conduits() - #compute draw coordinates, and the image dimensions (in px) - conduits, bb, h, w, shift_ratio = px_to_irl_coords(conduits, bbox=bbox, px_width=px_width) - nodes = px_to_irl_coords(nodes, bbox=bb, px_width=px_width)[0] + # antialias X2 + xplier = 1 + xplier *= px_width / 1024 # scale the symbology sizes + px_width = px_width * 2 - #create the PIL image and draw objects - img = Image.new('RGB', (w,h), white) - draw = ImageDraw.Draw(img) + # compute draw coordinates, and the image dimensions (in px) + conduits, bb, h, w, shift_ratio = px_to_irl_coords(conduits, bbox=bbox, px_width=px_width) + nodes = px_to_irl_coords(nodes, bbox=bb, px_width=px_width)[0] - #draw the basemap if required - if config.include_basemap is True: - _draw_basemap(draw, img, bb, px_width, shift_ratio) + # create the PIL image and draw objects + img = Image.new('RGB', (w, h), white) + draw = ImageDraw.Draw(img) - if parcels is not None: - #expects dataframe with coords and draw color column - par_px = px_to_irl_coords(parcels, bbox=bb, shift_ratio=shift_ratio, px_width=px_width)[0] - par_px.apply(lambda r: draw.polygon(r.draw_coords, fill=r.draw_color), axis=1) + # draw the basemap if required + if config.include_basemap is True: + _draw_basemap(draw, img, bb, px_width, shift_ratio) - # if config.include_parcels is True: - # par_flood = pdamage.flood_duration(nodes, parcel_node_join_csv=config.parcel_node_join_data) - # par_shp = spatial.read_shapefile(config.parcels_shapefile) - # par_px = px_to_irl_coords(par_shp, bbox=bb, shift_ratio=shift_ratio, px_width=px_width)[0] - # parcels = pd.merge(par_flood, par_px, right_on='PARCELID', left_index=True) - # parcels.apply(lambda row: draw_parcel_risk(row, draw), axis=1) + if parcels is not None: + # expects dataframe with coords and draw color column + par_px = px_to_irl_coords(parcels, bbox=bb, shift_ratio=shift_ratio, px_width=px_width)[0] + par_px.apply(lambda r: draw.polygon(r.draw_coords, fill=r.draw_color), axis=1) - #start the draw fest, mapping draw methods to each row in the dataframes - conduits.apply(lambda
row: draw_conduit(row, draw), axis=1) - nodes.apply(lambda row: draw_node(row, draw), axis=1) + # start the draw fest, mapping draw methods to each row in the dataframes + conduits.apply(lambda row: draw_conduit(row, draw), axis=1) + nodes.apply(lambda row: draw_node(row, draw), axis=1) - #ADD ANNOTATION AS NECESSARY - if title: annotate_title(title, draw) - if annotation: annotate_details(annotation, draw) - annotate_timestamp(draw) + # ADD ANNOTATION AS NECESSARY + if title: annotate_title(title, draw) + if annotation: annotate_details(annotation, draw) + annotate_timestamp(draw) - #SAVE IMAGE TO DISK - if file_path: - save_image(img, file_path) + # SAVE IMAGE TO DISK + if file_path: + save_image(img, file_path) - return img + return img diff --git a/swmmio/reporting/reporting.py b/swmmio/reporting/reporting.py index 2918c8a..03ab43b 100644 --- a/swmmio/reporting/reporting.py +++ b/swmmio/reporting/reporting.py @@ -2,21 +2,17 @@ #such that standard reporting and figures can be generated to report on the #perfomance of given SFR alternatives/options from swmmio.reporting.functions import * -from swmmio.reporting.utils import insert_in_file, insert_in_file_2 from swmmio.damage import parcels from swmmio.graphics import swmm_graphics as sg from swmmio.graphics import drawing -from swmmio.graphics.constants import * -from swmmio.utils.dataframes import create_dataframeRPT from swmmio.utils import spatial from swmmio.version_control.inp import INPDiff from swmmio import swmmio -import os import math import pandas as pd from swmmio.defs.config import * -import json, geojson -import shutil +from swmmio.defs.constants import * +import geojson class FloodReport(object): diff --git a/swmmio/tests/data/__init__.py b/swmmio/tests/data/__init__.py index c8550be..c1d5cc2 100644 --- a/swmmio/tests/data/__init__.py +++ b/swmmio/tests/data/__init__.py @@ -27,6 +27,7 @@ MODEL_XSECTION_ALT_02 = os.path.join(DATA_PATH, 'alt_test2.inp') MODEL_XSECTION_ALT_03 = os.path.join(DATA_PATH, 'alt_test3.inp') MODEL_BLANK = os.path.join(DATA_PATH, 'blank_model.inp') +BUILD_INSTR_01 = os.path.join(DATA_PATH, 'test_build_instructions_01.txt') df_test_coordinates_csv = os.path.join(DATA_PATH, 'df_test_coordinates.csv') OUTFALLS_MODIFIED = os.path.join(DATA_PATH, 'outfalls_modified_10.csv') \ No newline at end of file diff --git a/swmmio/tests/data/test_build_instructions_01.txt b/swmmio/tests/data/test_build_instructions_01.txt new file mode 100644 index 0000000..6a95107 --- /dev/null +++ b/swmmio/tests/data/test_build_instructions_01.txt @@ -0,0 +1,24 @@ +{ + "Parent Models": { + "Baseline": { + "C:\\PROJECTCODE\\swmmio\\swmmio\\tests\\data\\baseline_test.inp": "18-10-12 18:11" + }, + "Alternatives": { + "C:\\PROJECTCODE\\swmmio\\swmmio\\tests\\data\\alt_test3.inp": "18-10-12 18:11" + } + }, + "Log": { + "test_version_id": "cool comments" + } +} +==================================================================================================== + +[JUNCTIONS] +;; InvertElev MaxDepth InitDepth SurchargeDepth PondedArea ; Comment Origin +dummy_node1 -15.0 30.0 0 0 0 ; Altered alt_test3.inp +dummy_node5 -6.96 15.0 0 0 73511 ; Altered alt_test3.inp + +[CONDUITS] +;; InletNode OutletNode Length ManningN InletOffset OutletOffset InitFlow MaxFlow ; Comment Origin +pipe5 dummy_node6 dummy_node5 666 0.013000000000000001 0 0 0 0 ; Altered alt_test3.inp + diff --git a/swmmio/tests/test_dataframes.py b/swmmio/tests/test_dataframes.py index 8061dd3..072a79a 100644 --- a/swmmio/tests/test_dataframes.py +++ 
b/swmmio/tests/test_dataframes.py @@ -1,9 +1,41 @@ from swmmio.tests.data import (MODEL_FULL_FEATURES_PATH, MODEL_FULL_FEATURES__NET_PATH, - MODEL_BROWARD_COUNTY_PATH, MODEL_XSECTION_ALT_01, df_test_coordinates_csv, - MODEL_FULL_FEATURES_XY) + BUILD_INSTR_01, MODEL_XSECTION_ALT_01, df_test_coordinates_csv, + MODEL_FULL_FEATURES_XY, DATA_PATH) import swmmio from swmmio import create_dataframeINP import pandas as pd +import pytest +import shutil +import os + +def makedirs(newdir): + """ + replicate this in Py2 campatible way + os.makedirs(temp_vc_dir_02, exist_ok=True) + """ + if os.path.exists(newdir): + shutil.rmtree(newdir) + os.makedirs(newdir) + + +def test_create_dataframeBI(): + + # m = swmmio.Model(MODEL_BROWARD_COUNTY_PATH) + bi_juncs = swmmio.create_dataframeBI(BUILD_INSTR_01, section='[JUNCTIONS]') + + assert bi_juncs.loc['dummy_node1', 'InvertElev'] == pytest.approx(-15, 0.01) + assert bi_juncs.loc['dummy_node5', 'InvertElev'] == pytest.approx(-6.96, 0.01) + + # test with spaces in path + temp_dir_01 = os.path.join(DATA_PATH, 'path with spaces') + makedirs(temp_dir_01) + shutil.copy(BUILD_INSTR_01, temp_dir_01) + BUILD_INSTR_01_spaces = os.path.join(temp_dir_01, BUILD_INSTR_01) + + bi_juncs = swmmio.create_dataframeBI(BUILD_INSTR_01_spaces, section='[JUNCTIONS]') + assert bi_juncs.loc['dummy_node1', 'InvertElev'] == pytest.approx(-15, 0.01) + assert bi_juncs.loc['dummy_node5', 'InvertElev'] == pytest.approx(-6.96, 0.01) + shutil.rmtree(temp_dir_01) def test_create_dataframeRPT(): diff --git a/swmmio/tests/test_graphics.py b/swmmio/tests/test_graphics.py new file mode 100644 index 0000000..feae1e3 --- /dev/null +++ b/swmmio/tests/test_graphics.py @@ -0,0 +1,14 @@ +from swmmio.tests.data import (DATA_PATH, MODEL_FULL_FEATURES_XY) +import swmmio +from swmmio.graphics import swmm_graphics as sg +import os + + +def test_draw_model(): + m = swmmio.Model(MODEL_FULL_FEATURES_XY) + target_img_pth = os.path.join(DATA_PATH, 'test-draw-model.png') + sg.draw_model(m, file_path=target_img_pth) + + assert os.path.exists(target_img_pth) + os.remove(target_img_pth) + diff --git a/swmmio/tests/test_version_control.py b/swmmio/tests/test_version_control.py index f79ab22..494c5da 100644 --- a/swmmio/tests/test_version_control.py +++ b/swmmio/tests/test_version_control.py @@ -1,10 +1,25 @@ -from swmmio.tests.data import (MODEL_XSECTION_BASELINE, MODEL_FULL_FEATURES_XY, - MODEL_XSECTION_ALT_02, MODEL_XSECTION_ALT_03, MODEL_BLANK, - OUTFALLS_MODIFIED) - +from swmmio import create_dataframeBI +from swmmio.tests.data import (DATA_PATH, MODEL_XSECTION_BASELINE, + MODEL_FULL_FEATURES_XY, MODEL_XSECTION_ALT_03, + OUTFALLS_MODIFIED, BUILD_INSTR_01) from swmmio.version_control import utils as vc_utils from swmmio.version_control import inp from swmmio.utils import functions as funcs +from swmmio.version_control.inp import INPDiff + +import os +import shutil +import pytest + +def makedirs(newdir): + """ + replicate this in Py2 campatible way + os.makedirs(temp_vc_dir_02, exist_ok=True) + """ + if os.path.exists(newdir): + shutil.rmtree(newdir) + os.makedirs(newdir) + def test_complete_inp_headers(): @@ -21,18 +36,90 @@ def test_complete_inp_headers(): def test_create_inp_build_instructions(): + temp_vc_dir_01 = os.path.join(DATA_PATH, 'vc_dir') + temp_vc_dir_02 = os.path.join(DATA_PATH, 'vc root with spaces') + temp_vc_dir_03 = os.path.join(temp_vc_dir_02, 'vc_dir') inp.create_inp_build_instructions(MODEL_XSECTION_BASELINE, MODEL_XSECTION_ALT_03, - 'vc_dir', + temp_vc_dir_01, 'test_version_id', 'cool 
comments') - latest_bi = vc_utils.newest_file('vc_dir') + latest_bi = vc_utils.newest_file(temp_vc_dir_01) bi = inp.BuildInstructions(latest_bi) juncs = bi.instructions['[JUNCTIONS]'] assert (all(j in juncs.altered.index for j in [ 'dummy_node1', 'dummy_node5'])) + # assert (filecmp.cmp(latest_bi, BUILD_INSTR_01)) + shutil.rmtree(temp_vc_dir_01) + + # reproduce test with same files in a directory structure with spaces in path + makedirs(temp_vc_dir_02) + shutil.copy(MODEL_XSECTION_BASELINE, temp_vc_dir_02) + shutil.copy(MODEL_XSECTION_ALT_03, temp_vc_dir_02) + MODEL_XSECTION_BASELINE_spaces = os.path.join(temp_vc_dir_02, MODEL_XSECTION_BASELINE) + MODEL_XSECTION_ALT_03_spaces = os.path.join(temp_vc_dir_02, MODEL_XSECTION_ALT_03) + + inp.create_inp_build_instructions(MODEL_XSECTION_BASELINE_spaces, + MODEL_XSECTION_ALT_03_spaces, + temp_vc_dir_03, + 'test_version_id', 'cool comments') + + latest_bi_spaces = vc_utils.newest_file(temp_vc_dir_03) + bi_sp = inp.BuildInstructions(latest_bi_spaces) + + juncs_sp = bi_sp.instructions['[JUNCTIONS]'] + print(juncs_sp.altered) + assert (all(j in juncs_sp.altered.index for j in [ + 'dummy_node1', 'dummy_node5'])) + + shutil.rmtree(temp_vc_dir_02) + + +def test_inp_diff_from_bi(): + + change = INPDiff(build_instr_file=BUILD_INSTR_01, section='[JUNCTIONS]') + + alt_juncs = change.altered + assert alt_juncs.loc['dummy_node1', 'InvertElev'] == pytest.approx(-15, 0.01) + assert alt_juncs.loc['dummy_node5', 'InvertElev'] == pytest.approx(-6.96, 0.01) + + # test with spaces in path + temp_dir_01 = os.path.join(DATA_PATH, 'path with spaces') + makedirs(temp_dir_01) + shutil.copy(BUILD_INSTR_01, temp_dir_01) + BUILD_INSTR_01_spaces = os.path.join(temp_dir_01, BUILD_INSTR_01) + + change = INPDiff(build_instr_file=BUILD_INSTR_01_spaces, section='[JUNCTIONS]') + + alt_juncs = change.altered + assert alt_juncs.loc['dummy_node1', 'InvertElev'] == pytest.approx(-15, 0.01) + assert alt_juncs.loc['dummy_node5', 'InvertElev'] == pytest.approx(-6.96, 0.01) + + # test with parent models in directory structure with spaces in path + temp_dir_02 = os.path.join(DATA_PATH, 'root with spaces') + temp_dir_03 = os.path.join(temp_dir_02, 'vc_dir') + makedirs(temp_dir_02) + shutil.copy(MODEL_XSECTION_BASELINE, temp_dir_02) + shutil.copy(MODEL_XSECTION_ALT_03, temp_dir_02) + MODEL_XSECTION_BASELINE_spaces = os.path.join(temp_dir_02, MODEL_XSECTION_BASELINE) + MODEL_XSECTION_ALT_03_spaces = os.path.join(temp_dir_02, MODEL_XSECTION_ALT_03) + + inp.create_inp_build_instructions(MODEL_XSECTION_BASELINE_spaces, + MODEL_XSECTION_ALT_03_spaces, + temp_dir_03, + 'test_version_id', 'cool comments') + + latest_bi_spaces = vc_utils.newest_file(temp_dir_03) + change = INPDiff(build_instr_file=latest_bi_spaces, section='[JUNCTIONS]') + alt_juncs = change.altered + assert alt_juncs.loc['dummy_node1', 'InvertElev'] == pytest.approx(-15, 0.01) + assert alt_juncs.loc['dummy_node5', 'InvertElev'] == pytest.approx(-6.96, 0.01) + + shutil.rmtree(temp_dir_01) + shutil.rmtree(temp_dir_02) + # def test_add_models(): # inp.create_inp_build_instructions(MODEL_BLANK, @@ -73,7 +160,6 @@ def test_modify_model(): outfalls.loc[:, 'InvertElev'] = pd.to_numeric(outfalls.loc[:, 'InvertElev']) + rise of_test.loc[:, 'InvertElev'] = pd.to_numeric(of_test.loc[:, 'InvertElev']) - # copy the base model into a new directory newdir = os.path.join(baseline.inp.dir, str(rise)) os.mkdir(newdir) @@ -87,4 +173,4 @@ def test_modify_model(): of2 = m2.inp.outfalls shutil.rmtree(newdir) # of2.to_csv(os.path.join(newdir, 
baseline.inp.name + "_new_outfalls.csv")) - assert(of2.loc['J4', 'InvertElev'].round(1) == of_test.loc['J4', 'InvertElev'].round(1)) + assert (of2.loc['J4', 'InvertElev'].round(1) == of_test.loc['J4', 'InvertElev'].round(1)) diff --git a/swmmio/utils/dataframes.py b/swmmio/utils/dataframes.py index d7ba3a7..950d050 100644 --- a/swmmio/utils/dataframes.py +++ b/swmmio/utils/dataframes.py @@ -7,7 +7,7 @@ def create_dataframeBI(bi_path, section='[CONDUITS]'): """ - given a path to a biuld instructions file, create a dataframe of data in a + given a path to a build instructions file, create a dataframe of data in a given section """ headerdefs = funcs.complete_inp_headers(bi_path) diff --git a/swmmio/utils/text.py b/swmmio/utils/text.py index 0ea7764..1c7e0bb 100644 --- a/swmmio/utils/text.py +++ b/swmmio/utils/text.py @@ -74,7 +74,7 @@ def extract_section_from_inp(filepath, sectionheader, cleanheaders=True, return_string=False, skiprows=0, skipheaders=False): """ INPUT path to text file (inp, rpt, etc) and a text string - matchig the section header of the to be extracted + matching the section header of the to be extracted creates a new text file in the same directory as the filepath and returns the path to the new file. diff --git a/swmmio/version_control/inp.py b/swmmio/version_control/inp.py index 57a441c..9554e83 100644 --- a/swmmio/version_control/inp.py +++ b/swmmio/version_control/inp.py @@ -4,14 +4,10 @@ from swmmio.utils.dataframes import create_dataframeINP, create_dataframeBI from swmmio.utils import text import pandas as pd -from datetime import datetime import os -import sys from copy import deepcopy -if sys.version_info[0] < 3: - from io import StringIO -else: - from io import StringIO + + problem_sections = ['[CURVES]', '[TIMESERIES]', '[RDII]', '[HYDROGRAPHS]'] @@ -27,20 +23,20 @@ class BuildInstructions(object): def __init__(self, build_instr_file=None): - #create a change object for each section that is different from baseline + # create a change object for each section that is different from baseline self.instructions = {} self.metadata = {} if build_instr_file: - #read the instructions and create a dictionary of Change objects + # read the instructions and create a dictionary of Change objects allheaders = funcs.complete_inp_headers(build_instr_file) instructions = {} for section in allheaders['order']: change = INPDiff(build_instr_file=build_instr_file, section=section) - instructions.update({section:change}) + instructions.update({section: change}) self.instructions = instructions - #read the meta data + # read the meta data self.metadata = vc_utils.read_meta_data(build_instr_file) def __add__(self, other): @@ -50,16 +46,15 @@ def __add__(self, other): new_change = change_obj + other.instructions[section] bi.instructions[section] = new_change else: - #section doesn't exist in other, maintain current instructions + # section doesn't exist in other, maintain current instructions bi.instructions[section] = change_obj for section, change_obj in other.instructions.items(): if section not in self.instructions: bi.instructions[section] = change_obj - - #combine the metadata - #deepcopy so child structures aren't linked to original + # combine the metadata + # deepcopy so child structures aren't linked to original bi.metadata = deepcopy(self.metadata) otherbaseline = other.metadata['Parent Models']['Baseline'] otheralternatives = other.metadata['Parent Models']['Alternatives'] @@ -70,7 +65,7 @@ def __add__(self, other): return bi def __radd__(self, other): - #this is so we can 
call sum() on a list of build_instructions + # this is so we can call sum() on a list of build_instructions if other == 0: return self else: @@ -83,7 +78,7 @@ def save(self, dir, filename): if not os.path.exists(dir): os.makedirs(dir) filepath = os.path.join(dir, filename) - with open (filepath, 'w') as f: + with open(filepath, 'w') as f: vc_utils.write_meta_data(f, self.metadata) for section, change_obj in self.instructions.items(): section_df = pd.concat([change_obj.removed, change_obj.altered, change_obj.added]) @@ -97,35 +92,36 @@ def build(self, baseline_dir, target_path): """ basemodel = swmmio.Model(baseline_dir) allheaders = funcs.complete_inp_headers(basemodel.inp.path) - #new_inp = os.path.join(target_dir, 'model.inp') - with open (target_path, 'w') as f: + # new_inp = os.path.join(target_dir, 'model.inp') + with open(target_path, 'w') as f: for section in allheaders['order']: - #check if the section is not in problem_sections and there are changes - #in self.instructions and commit changes to it from baseline accordingly + # check if the section is not in problem_sections and there are changes + # in self.instructions and commit changes to it from baseline accordingly if (section not in problem_sections - and allheaders['headers'][section] != 'blob' - and section in self.instructions): + and allheaders['headers'][section] != 'blob' + and section in self.instructions): - #df of baseline model section + # df of baseline model section basedf = create_dataframeINP(basemodel.inp.path, section) - #grab the changes to + # grab the changes to changes = self.instructions[section] - #remove elements that have alterations and or tagged for removal + # remove elements that have alterations and or tagged for removal remove_ids = changes.removed.index | changes.altered.index new_section = basedf.drop(remove_ids) - #add elements + # add elements new_section = pd.concat([new_section, changes.altered, changes.added]) else: - #section is not well understood or is problematic, just blindly copy + # section is not well understood or is problematic, just blindly copy new_section = create_dataframeINP(basemodel.inp.path, section=section) - #write the section + # write the section vc_utils.write_inp_section(f, allheaders, section, new_section) + class INPDiff(object): """ This object represents the 'changes' of a given section of a INP file @@ -136,41 +132,43 @@ class INPDiff(object): """ + def __init__(self, model1=None, model2=None, section='[JUNCTIONS]', build_instr_file=None): if model1 and model2: df1 = create_dataframeINP(model1.inp.path, section) df2 = create_dataframeINP(model2.inp.path, section) + m2_origin_string = os.path.basename(model2.inp.path).replace(' ', '-') - #BUG -> this fails if a df1 or df2 is None i.e. if a section doesn't exist in one model + # BUG -> this fails if a df1 or df2 is None i.e. 
if a section doesn't exist in one model added_ids = df2.index.difference(df1.index) removed_ids = df1.index.difference(df2.index) - #find where elements were changed (but kept with same ID) - common_ids = df1.index.difference(removed_ids) #original - removed = in common - #both dfs concatenated, with matched indices for each element + # find where elements were changed (but kept with same ID) + common_ids = df1.index.difference(removed_ids) # original - removed = in common + # both dfs concatenated, with matched indices for each element full_set = pd.concat([df1.loc[common_ids], df2.loc[common_ids]]) # remove whitespace full_set = full_set.apply(lambda x: x.str.strip() if x.dtype == "object" else x) - #drop dupes on the set, all things that did not changed should have 1 row + # drop dupes on the set, all things that did not changed should have 1 row changes_with_dupes = full_set.drop_duplicates() - #duplicate indicies are rows that have changes, isolate these + # duplicate indicies are rows that have changes, isolate these # idx[idx.duplicated()].unique() - changed_ids = changes_with_dupes.index[changes_with_dupes.index.duplicated()].unique() #.get_duplicates() + changed_ids = changes_with_dupes.index[changes_with_dupes.index.duplicated()].unique() # .get_duplicates() added = df2.loc[added_ids].copy() - added['Comment'] = 'Added'# from model {}'.format(model2.inp.path) - added['Origin'] = model2.inp.path + added['Comment'] = 'Added' # from model {}'.format(model2.inp.path) + added['Origin'] = m2_origin_string altered = df2.loc[changed_ids].copy() - altered['Comment'] = 'Altered'# in model {}'.format(model2.inp.path) - altered['Origin'] = model2.inp.path + altered['Comment'] = 'Altered' # in model {}'.format(model2.inp.path) + altered['Origin'] = m2_origin_string removed = df1.loc[removed_ids].copy() - #comment out the removed elements - #removed.index = ["; " + str(x) for x in removed.index] - removed['Comment'] = 'Removed'# in model {}'.format(model2.inp.path) - removed['Origin'] = model2.inp.path + # comment out the removed elements + # removed.index = ["; " + str(x) for x in removed.index] + removed['Comment'] = 'Removed' # in model {}'.format(model2.inp.path) + removed['Origin'] = m2_origin_string self.old = df1 self.new = df2 @@ -179,8 +177,8 @@ def __init__(self, model1=None, model2=None, section='[JUNCTIONS]', build_instr_ self.altered = altered if build_instr_file: - #if generating from a build instructions file, do this (more efficient) - df = create_dataframeBI(build_instr_file, section = section) + # if generating from a build instructions file, do this (more efficient) + df = create_dataframeBI(build_instr_file, section=section) self.added = df.loc[df['Comment'] == 'Added'] self.removed = df.loc[df['Comment'] == 'Removed'] @@ -196,6 +194,7 @@ def __add__(self, other): return change + def generate_inp_from_diffs(basemodel, inpdiffs, target_dir): """ create a new inp with respect to a baseline inp and changes instructed @@ -206,11 +205,11 @@ def generate_inp_from_diffs(basemodel, inpdiffs, target_dir): NOTE THIS ISN'T USED ANYWHERE. DELETE ???? 
""" - #step 1 --> combine the diff/build instructions + # step 1 --> combine the diff/build instructions allheaders = funcs.complete_inp_headers(basemodel.inp.path) combi_build_instr_file = os.path.join(target_dir, 'build_instructions.txt') newinp = os.path.join(target_dir, 'new.inp') - with open (combi_build_instr_file, 'w') as f: + with open(combi_build_instr_file, 'w') as f: for header in allheaders['order']: s = '' section_header_written = False @@ -230,28 +229,28 @@ def generate_inp_from_diffs(basemodel, inpdiffs, target_dir): skipheaders=True) if sect_s: - #remove the extra space between data in the same table - #coming from diffrent models. - if sect_s[-2:] == '\n\n': #NOTE Check this section... + # remove the extra space between data in the same table + # coming from diffrent models. + if sect_s[-2:] == '\n\n': # NOTE Check this section... s += sect_s[:-1] else: s += sect_s f.write(s + '\n') - #step 2 --> clean up the new combined diff instructions + # step 2 --> clean up the new combined diff instructions # df_dict = clean_inp_diff_formatting(combi_build_instr_file) #makes more human readable - #step 3 --> create a new inp based on the baseline, with the inp_diff - #instructions applied - with open (newinp, 'w') as f: + # step 3 --> create a new inp based on the baseline, with the inp_diff + # instructions applied + with open(newinp, 'w') as f: for section in allheaders['order']: print(section) if section not in problem_sections and allheaders['headers'][section] != 'blob': - #check if a changes from baseline spreadheet exists, and use this - #information if available to create the changes array + # check if a changes from baseline spreadheet exists, and use this + # information if available to create the changes array df = create_dataframeINP(basemodel.inp.path, section) - df['Origin'] = '' #add the origin column if not there + df['Origin'] = '' # add the origin column if not there if section in df_dict: df_change = df_dict[section] ids_to_drop = df_change.loc[df_change['Comment'].isin(['Removed', 'Altered'])].index @@ -259,16 +258,14 @@ def generate_inp_from_diffs(basemodel, inpdiffs, target_dir): df = df.append(df_change.loc[df_change['Comment'].isin(['Added', 'Altered'])]) new_section = df else: - #blindly copy this section from the base model + # blindly copy this section from the base model new_section = create_dataframeINP(basemodel.inp.path, section=section) - #write the section into the inp file and the excel file + # write the section into the inp file and the excel file vc_utils.write_inp_section(f, allheaders, section, new_section) - def create_inp_build_instructions(inpA, inpB, path, filename, comments=''): - """ pass in two inp file paths and produce a spreadsheet showing the differences found in each of the INP sections. 
These differences should then be used @@ -283,7 +280,7 @@ def create_inp_build_instructions(inpA, inpB, path, filename, comments=''): modela = swmmio.Model(inpA) modelb = swmmio.Model(inpB) - #create build insructions folder + # create build instructions folder if not os.path.exists(path): os.makedirs(path) filepath = os.path.join(path, filename) + '.txt' @@ -292,27 +289,27 @@ def create_inp_build_instructions(inpA, inpB, path, filename, comments=''): # vc_utils.create_change_info_sheet(excelwriter, modela, modelb) problem_sections = ['[TITLE]', '[CURVES]', '[TIMESERIES]', '[RDII]', '[HYDROGRAPHS]'] - with open (filepath, 'w') as newf: + with open(filepath, 'w') as newf: - #write meta data + # write meta data metadata = { - #'Baseline Model':modela.inp.path, - #'ID':filename, - 'Parent Models':{ - 'Baseline':{inpA:vc_utils.modification_date(inpA)}, - 'Alternatives':{inpB:vc_utils.modification_date(inpB)} - }, - 'Log':{filename:comments} - } - #print metadata + # 'Baseline Model':modela.inp.path, + # 'ID':filename, + 'Parent Models': { + 'Baseline': {inpA: vc_utils.modification_date(inpA)}, + 'Alternatives': {inpB: vc_utils.modification_date(inpB)} + }, + 'Log': {filename: comments} + } + # print metadata vc_utils.write_meta_data(newf, metadata) for section in allsections_a['order']: if section not in problem_sections: - #calculate the changes in the current section + # calculate the changes in the current section changes = INPDiff(modela, modelb, section) data = pd.concat([changes.removed, changes.added, changes.altered]) - #vc_utils.write_excel_inp_section(excelwriter, allsections_a, section, data) - vc_utils.write_inp_section(newf, allsections_a, section, data, pad_top=False, na_fill='NaN') #na fill fixes SNOWPACK blanks spaces issue - + # vc_utils.write_excel_inp_section(excelwriter, allsections_a, section, data) + vc_utils.write_inp_section(newf, allsections_a, section, data, pad_top=False, + na_fill='NaN') # na fill fixes SNOWPACK blank spaces issue # excelwriter.save()
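To try the behavior covered by the new tests locally, the snippet below is a minimal sketch (not part of the patch itself): it renders a model to a PNG with swmm_graphics.draw_model and reads a build-instructions file back into dataframes with create_dataframeBI and INPDiff. It assumes the bundled test data in swmmio.tests.data is installed; the output filename review-draw-model.png is an arbitrary placeholder.

    import os
    import swmmio
    from swmmio.graphics import swmm_graphics as sg
    from swmmio.tests.data import MODEL_FULL_FEATURES_XY, BUILD_INSTR_01, DATA_PATH
    from swmmio.version_control.inp import INPDiff

    # render the example model to a PNG, mirroring test_draw_model()
    m = swmmio.Model(MODEL_FULL_FEATURES_XY)
    png_path = os.path.join(DATA_PATH, 'review-draw-model.png')  # arbitrary output name
    sg.draw_model(m, file_path=png_path)

    # read the [JUNCTIONS] section of a build-instructions file two ways,
    # mirroring test_create_dataframeBI() and test_inp_diff_from_bi()
    bi_juncs = swmmio.create_dataframeBI(BUILD_INSTR_01, section='[JUNCTIONS]')
    diff = INPDiff(build_instr_file=BUILD_INSTR_01, section='[JUNCTIONS]')
    print(bi_juncs.loc['dummy_node1', 'InvertElev'])  # approx -15 per the sample file
    print(diff.altered.index.tolist())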