We're building a social networking site where users can log, rate and review albums.
When conducting focus testing for our music app we identified a trend among testers: they had strong emotional connections to albums, but had not listened to one in full for a long time, thanks to Spotify's Discover algorithm. We decided to build our app as a place where music fans could re-introduce themselves to long-form music consumption, and to the album as an object of auteurial intent.
We were conscious of not wanting to build a 'best new music' app, or even a music streaming app. We tried to emphasise albums over singles, and shied away from placing too much emphasis on individual songs. We also wanted to ensure that our site wasn't just a companion app to an existing streaming service (Spotify, Apple Music, etc.), so we avoided relying too heavily on, for example, the Spotify API.
We took focus group feedback on board, but also made sure that every decision about which features to include or leave out was strongly focused on a 'music fandom' rather than a 'casual music listener' user experience.
We structured the three weeks like so:
- Design week
- UI focus
- Database integration

Each day we set ourselves tasks, and tried to focus on building the minimum viable product at all times. User research indicated that many people expressed at least some dissatisfaction with the way they listened to music compared to the pre-streaming era, which confirmed our model. We also found that people used widely disparate streaming platforms, which informed the decision not to lean too heavily on any one API.
We have put a strong emphasis on clean, readable UI and straightforward semantic HTML to ensure that the site is accessible to as many users as possible, and can be made sense of with a screen reader or other assistive tools.
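As a minimal sketch of what this looks like in practice (the component and prop names here are illustrative, not taken from our codebase):

```tsx
// components/ReviewCard.tsx — hypothetical sketch of the semantic markup approach
type ReviewCardProps = {
  albumTitle: string;
  artist: string;
  rating: number; // whole number, 1-5
  body: string;
};

export default function ReviewCard({ albumTitle, artist, rating, body }: ReviewCardProps) {
  return (
    // <article> tells a screen reader this is a self-contained piece of content
    <article aria-label={`Review of ${albumTitle} by ${artist}`}>
      <header>
        <h2>{albumTitle}</h2>
        <p>{artist}</p>
      </header>
      {/* a visible star rating with an accessible text alternative */}
      <p aria-label={`Rated ${rating} out of 5`}>{'★'.repeat(rating)}</p>
      <p>{body}</p>
    </article>
  );
}
```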
We need to make sure that our use of the last.fm API falls within their acceptable criteria for building an app.

The team was able to work both in pairs and individually, making sure that we were working as efficiently as possible. At points, splitting off into four meant that there were parts of the website some of the team were unfamiliar with; had we more time, I would have discouraged this, but it worked out more or less OK.
From our user testing of a Figma prototype:
- We found that our homepage and our 'Discover' page were too similar, and changed them accordingly.
- We found that users were not immediately aware of the purpose of our app, and so included a call to action on the front page.
- We found that some of the UI was too small and needed to be bigger/more visible.
It's hard to say how right or wrong our assumptions were without a base of users to test the end product, but I think our user profiles were fairly accurate based on the focus testing we did, so I don't have much reason to doubt that our judgement was sound.
We would have liked to build a 'hot 100' list of albums that had seen a lot of activity (likes, reviews, listens) on the site. Overall I'm happy with what we were able to achieve and how quickly we were able to build it.
Sasha was initially in charge of planning sprints; unfortunately some personal issues interfered with this task, and from then on decision-making was shared across the group.
Cemal was in charge of Quality Assurance, making sure to test that the user stories could be completed satisfactorily.
Michael was in charge of Deployment, and decided what we were going to use to build the app, where it was going to be hosted, and ensured that it successfully deployed to Vercel.
Jihye was in charge of User Experience, ensuring that the site was written with the user in mind, and that navigating the website was a logical process.
Separating out the team's roles and making each one the responsibility of a single member ensured that we never overextended ourselves by worrying too much about each other's work, and could focus on our own areas. However, we were all happy to work on each other's tasks, as long as the role holder was willing to take point.
We hoped that in building this product, users would relate to their music consumption a little differently, and be more wary of Spotify's business model. We were conscious of the fact that we might have created a space that could foster gatekeeping or elitism, which was not our goal, and were wary of heaping yet another social networking site onto users.
We worked on a shared Miro board and planned the most straightforward route through the site to fulfil each user story we came up with. When working on our Figma prototype we made sure that each story could be completed in as few clicks as possible, in an intuitive manner.
We decided that:
- server-side rendering made sense for all of our API requests (see the sketch after this list)
- a relational database was the most appropriate way to store user data, including profiles, albums, album reviews and a user's followers
- cloud hosting would allow the project to scale up if it took off
- we would work on the frontend first, since we wanted the product to be as desirable to a user as possible
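As a minimal sketch of the server-side rendering decision, assuming the Next.js pages router (the last.fm endpoint and parameter names here are illustrative rather than our exact code):

```tsx
// pages/album/[id].tsx — sketch only; check the last.fm API docs for real parameters
import type { GetServerSideProps } from 'next';

type AlbumProps = { name: string; artist: string };

export const getServerSideProps: GetServerSideProps<AlbumProps> = async ({ params }) => {
  // This fetch runs on the server, so the API key never reaches the browser
  const res = await fetch(
    `https://ws.audioscrobbler.com/2.0/?method=album.getinfo&mbid=${params?.id}` +
      `&api_key=${process.env.LASTFM_API_KEY}&format=json`
  );
  const data = await res.json();
  return { props: { name: data.album.name, artist: data.album.artist } };
};

export default function AlbumPage({ name, artist }: AlbumProps) {
  return (
    <main>
      <h1>{name}</h1>
      <p>{artist}</p>
    </main>
  );
}
```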
We made sure that our Miro board had a tech spec that we could refer back to when adding features to the site. We tried to foresee what would be required to (for example) add a user to your 'following' list: what SQL queries were likely to be needed, and how the tables would have to reference each other in order to modify and display that data.
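As a sketch of that kind of planning, the 'following' feature comes down to a join table and a couple of queries along these lines (the table and column names, and the use of the `pg` client, are assumptions for illustration):

```ts
// lib/follows.ts — hypothetical sketch; assumes a follows(follower_id, followed_id)
// join table referencing users(id) twice, with a unique constraint on the pair
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Add a user to someone's 'following' list; the unique constraint
// makes a repeated follow a no-op instead of an error
export async function followUser(followerId: number, followedId: number) {
  await pool.query(
    'INSERT INTO follows (follower_id, followed_id) VALUES ($1, $2) ON CONFLICT DO NOTHING',
    [followerId, followedId]
  );
}

// List everyone a given user follows, joining back to the users table for display
export async function getFollowing(userId: number) {
  const { rows } = await pool.query(
    `SELECT u.id, u.display_name
       FROM follows f
       JOIN users u ON u.id = f.followed_id
      WHERE f.follower_id = $1`,
    [userId]
  );
  return rows;
}
```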
We tried wherever possible to ensure that our code was modular and consistently written, so that there were no overlaps or redundancies. We enforced a strict separation of concerns within our repository, making sure that everything was sensibly placed.
What interesting technical problems did you have to solve? Outline and apply the rationale and use of algorithms, logic and data structures. (K9, S16)
To quickly and efficiently debug issues we ensured wherever possible that we were working in pairs, with one person taking point and writing the code, and the other watching for typos and moving around the file structure in Live Share to find potential conflicts.
We figured out user stories ('I can log in to the app from the homepage', 'I am able to change my profile name and see it on my profile page', etc.) describing what an imaginary user might want to do in day-to-day usage of our site, and then ran tests against them using Cypress. Following TDD practices, we rewrote our code until the tests passed.
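A test for the login story might look roughly like this (the selectors and routes are illustrative):

```ts
// cypress/e2e/login.cy.ts — illustrative sketch; selectors and routes are assumptions
describe('I can log in to the app from the homepage', () => {
  it('logs in and lands on the profile page', () => {
    cy.visit('/');
    cy.get('input[name="email"]').type('fan@example.com');
    cy.get('input[name="password"]').type('correct-horse');
    cy.contains('button', 'Log in').click();
    // This assertion is exactly the kind of check that caught our bad redirects
    cy.url().should('include', '/profile');
  });
});
```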
Some of the automated tests we ran in Cypress behaved in ways we did not anticipate, because some routes did not redirect to the intended page. Since the interface was fairly intuitive we had never noticed this before testing (it only took a second to navigate to the correct page by hand), but automated testing meant that no human user was there to compensate for the bad paths.
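A typical fix was making the redirect explicit on the server; a hypothetical handler:

```ts
// pages/api/reviews.ts — hypothetical sketch of an explicit redirect after a form POST
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== 'POST') {
    return res.status(405).end();
  }
  // ...save the review to the database here...
  // 303 tells the browser to follow up the POST with a GET to the album page
  res.redirect(303, `/album/${req.body.albumId}`);
}
```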
We were impressed with the way that Next.js handled dynamic page creation, as well as the efficiency with which it loaded static props, so we decided to use it and deploy to Vercel. The continuous integration with GitHub was also a factor in choosing Next.js.
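In outline, a dynamic page with statically loaded props looks something like this (the data fetching is simplified away):

```tsx
// pages/artist/[slug].tsx — simplified sketch of dynamic routes with static props
import type { GetStaticPaths, GetStaticProps } from 'next';

type ArtistProps = { slug: string };

export const getStaticPaths: GetStaticPaths = async () => {
  // In a real app the paths would come from the database; 'blocking'
  // lets Next.js render pages for artists it hasn't seen at build time
  return { paths: [], fallback: 'blocking' };
};

export const getStaticProps: GetStaticProps<ArtistProps> = async ({ params }) => {
  return { props: { slug: String(params?.slug) } };
};

export default function ArtistPage({ slug }: ArtistProps) {
  return <h1>{slug}</h1>;
}
```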
What problems did you encounter during deployment?

Trying to deploy to Vercel made us aware of a few errors in our code that weren't apparent in dev builds.
It is easy to modify parts of the codebase without affecting others, because wherever possible we built every function or route as a standalone file. This means that if a function had to be changed, it would (in the best case) leave the rest of the codebase untouched.
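For example, a small helper can live in its own file so a change to it can't ripple anywhere else (a hypothetical module):

```ts
// lib/formatRating.ts — hypothetical example of the one-function-per-file approach
// Turning a numeric rating into a star string lives here and nowhere else,
// so a change to the display format can't break unrelated parts of the codebase.
export function formatRating(rating: number, max = 5): string {
  const clamped = Math.max(0, Math.min(max, Math.round(rating)));
  return '★'.repeat(clamped) + '☆'.repeat(max - clamped);
}
```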
When Sasha was forced to take some time away from the project, the rest of the team took time to explain the code to her, which was easy to do because it had been written in a straightforward, modular manner.