# Line drawing to handbags

## Overview

This is the code for my final project for CS548-12, taught by Dr. Michael J. Reale at SUNY Poly. We run the edges2handbags dataset through a pix2pix model.

## Logging

We use the logging module throughout the code. If you need to adjust the logging levels, you will have to edit pixtopix/main.py directly.
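Since the levels are hardcoded, changing them means editing the source. For reference, here is a minimal sketch of adjusting levels with Python's standard logging module (the logger name below is illustrative, not necessarily the one the project uses):

```python
import logging

# Raising the root level to WARNING suppresses INFO/DEBUG output
# from every logger that has not set its own explicit level.
logging.basicConfig(level=logging.WARNING)

log = logging.getLogger("pixtopix")  # hypothetical logger name
log.info("suppressed at WARNING level")
log.warning("still printed")
```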

## Data

The code downloads the data itself. You can configure the data location in the config files; by default the data goes into ~/.keras/datasets/.
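For context, Keras dataset downloads typically go through `tf.keras.utils.get_file`, which caches under ~/.keras/datasets/ when no cache directory is given. A hedged sketch (the helper name is hypothetical, and the URL comes from your config, not from this README):

```python
def download_dataset(url, cache_dir=None):
    """Hypothetical helper: fetch and extract a dataset archive.

    tf.keras.utils.get_file caches under ~/.keras/datasets/ when
    cache_dir is None, matching the default described above.
    """
    import tensorflow as tf  # imported lazily, as elsewhere in this project
    return tf.keras.utils.get_file(origin=url, extract=True, cache_dir=cache_dir)
```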

## Models

Checkpoints and models are saved into the directory configured under Configs.log_dir. The trained model is in my OneDrive.

## Software Dependencies

Tested with:

- tensorflow 2.13.0
- PIL 10.1.0
- matplotlib 3.8.1
- torch_fidelity 0.3.0
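If you want to reproduce this environment, one option (a suggestion, not a file shipped with the repository) is to pin the versions in a requirements.txt; note that PIL is distributed on PyPI as Pillow:

```
tensorflow==2.13.0
Pillow==10.1.0
matplotlib==3.8.1
torch-fidelity==0.3.0
```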

## Version Note

Do NOT use tensorflow 2.15.0; it breaks checkpoint loading with the error `TypeError: Functional._lookup_dependency() takes 2 positional arguments but 3 were given` (see tensorflow/tensorflow#61265).

## Tensorflow Imports

I try to keep tensorflow from being imported until it is actually needed. This keeps the logging noise to a minimum.
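The pattern is simply a function-local import; a minimal sketch (the function name is illustrative, not the project's actual code):

```python
def build_generator():
    # Importing inside the function means that importing this module
    # alone never pulls in TensorFlow, so its startup logging only
    # appears once a model is actually built.
    import tensorflow as tf
    return tf.keras.Sequential()
```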

## Conda Note

I did not use our conda environment for this project, because it broke most other tools on my system.

## FID and KID Results

To get the KID and FID results you will need torch_fidelity (0.3.0). I accomplish this by using the conda CV environment we built in class. torch_fidelity is only needed for this one task and is not imported anywhere else; likewise, this one function does not import tensorflow.
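torch_fidelity's public entry point is `calculate_metrics`. A hedged sketch of how such an eval step could look (the helper name is hypothetical, not the project's actual function):

```python
def eval_fid_kid(real_dir, pred_dir):
    # torch_fidelity is imported lazily so the rest of the program never
    # depends on it, mirroring the note above about tensorflow.
    from torch_fidelity import calculate_metrics

    # Compares two image directories and returns a dict of metrics
    # for the requested measures (FID and KID here).
    return calculate_metrics(input1=real_dir, input2=pred_dir, fid=True, kid=True)
```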

## Additional Setup

Make sure the pixtopix directory is in your Python path, or run main.py from this directory. You can run `./main.py --help`.

## Output Directory

The output directory can be configured in the config file you pass to the program, with the option of adding a timestamp to the directory name.

```
.
├── checkpoints
├── discriminator.jpg
├── fit
├── generator.jpg
├── image_dump_0.jpg
├── image_dump_1000.jpg
├── model-1000.keras
└── operation.cfg
```

### checkpoints

This is where the checkpoints get stored. When loading, pass the file ckpt-1 itself, without the .index extension.

`python ./main.py --config ./reale-run.cfg --checkpoint ./reale_run/checkpoints/ckpt-1`

### fit

This is the data tensorboard can read. I did not use it.

### model-####.keras

A model is saved along with each checkpoint as a Keras zipped file. You can pass it in along with an image to generate a single prediction:

`python ./main.py --config ./reale-run.cfg --load-model ./reale_run/model-999.keras ~/.keras/datasets/edges2handbags/val/100_AB.jpg`

This will write a file 100_AB.jpg.predicated.jpg to the current directory.

### image_dump_####.jpg

An output of a predicted image. The same input image is used for the whole run of the program, but not across checkpoint loads.

### operation.cfg

The config used for the last run of the program.

### generator.jpg and discriminator.jpg

These are generated by Keras as pictorial representations of the models.

### generated/{real,pred}

This is where files generated with the --generate switch are put. The real directory contains the actual images of handbags, and pred contains the generated ones. You need to give the program a saved model for it to generate images:

`python ./main.py --config reale-run.cfg --generate --load-model ./reale_run/model-40000.keras`

## Running the project

Everything is done through main.py. Run it with --help to see the switches. Values passed as command-line switches override the same values in a config file.

### main.py --help

```
usage: pixtopix [-h] [-c CHECKPOINT] [-f CONFIG] [-d] [-l [LOAD_MODEL ...]] [-g] [-e EVAL]

Module to build and run a pix2pix model

options:
  -h, --help            show this help message and exit
  -c CHECKPOINT, --checkpoint CHECKPOINT
                        Start from this checkpoint file. Do NOT add the .index
                        to the file path; you only need the ckpt-1 or whatever
                        number.
  -f CONFIG, --config CONFIG
                        Load the provided config file
  -d, --dump-config     Dump the config to stdout
  -l [LOAD_MODEL ...], --load-model [LOAD_MODEL ...]
                        Load a model from a directory and then try to push it
                        through all the images passed in
  -g, --generate        Generate a bunch of images from a directory of inputs.
                        Uses the url, dataset and extension values in the
                        passed in config
  -e EVAL, --eval EVAL  Eval a directory and print FID and KID
```

### Training

`python ./main.py` runs the training by default, using the default config file. You can pass a config file to control where results go; you probably want to run:

`python ./main.py --config ./reale-run.cfg`

The reale-run.cfg will drop all results into a reale_run directory.

### Checkpoints

You can put a path to a checkpoint in the config, or pass one on the command line. The program does not attempt to load a checkpoint unless specifically given one.

`python ./main.py --config ./reale-run.cfg --checkpoint ./reale_run/checkpoints/ckpt-1`

This loads the first checkpoint that was built.

### Generation

You can have it generate images by passing in a model (the model path can also go in the config file):

`python ./main.py --config ./reale-run.cfg --generate --load-model ./reale_run/model-999.keras`

It loads the online dataset pointed to by the config file, takes the file path from that, and generates an image for every image in the "val" directory. This "val" directory name is hardcoded.

### Eval

Once you have generated images, you can evaluate them with torch_fidelity:

`python ./main.py --config ./reale-run.cfg --eval ./reale_run/generated/`

This runs through everything and prints out the FID and KID values.
