Honours Project: Summary

4 May 2015, 4:00 pm

Codelight is a web-based development environment for HTML, CSS and JavaScript, designed to let students share coding knowledge whilst learning. Codelight aims to improve communication between groups of learners, whether remote or within a class or workshop, by enabling them to collaborate in real time on a single piece of code and request help from others. Users are able to form groups to work together on similar projects, view the work of others, suggest improvements and leave feedback. Providing more fluid interaction between novices and advanced users gives students the ability to improve by learning from one another.

Posted in: Honours Project

Honours Project: Critical Reflection

4 May 2015, 2:59 pm

Going into fourth year and setting my own brief was a fairly daunting experience. My early concepts were mostly focused around community and skill sharing. Quite a few factors influenced me to finally settle on the concept that would become Codelight. I was introduced to HTML and PHP at a relatively young age and would happily spend hours combing through open source applications and trying to build something functional from the scraps. But many people I speak to view coding as some kind of arcane process which is completely removed from reality, often because they’ve just not had the opportunity or motivation to experience it, or don’t know where to look for useful resources. I wanted to create a place where people could interact in real time whilst tinkering with what one another had written. Noble intentions aside, I have to admit that the technical challenge of building such a complex, real-time application from the ground up was a major contributing factor in my decision.


During development this year I was able to build on some of the principles I employed during our previous Designing Social Networks module, which I had picked up from developing themes and plugins for WordPress. After putting together an early ‘proof of concept’ prototype, I made a point of forcing myself to overcome the ‘whatever works’ mindset of throwing things together as they are needed, which often comes with the excitement of a new project, and to create a structured foundation. Although I wish I’d had this realisation earlier in the year and saved myself a lot of heavy refactoring throughout the process, seeing such a sharp contrast between poor, thrown-together code and a well thought-out, structured application really drove home how much structure can speed up development and make implementing new features less painful.

At the start of the year, I considered myself reasonably competent with front-end JavaScript and was expecting Node to be abstracted or augmented to the point that it would be almost a completely different thing from the JavaScript I was familiar with. But after a slight struggle wrapping my head around Node modules, and seeing that the language itself functioned identically on both sides, I began to realize that there were entire dimensions to JavaScript that I knew nothing about. This realization only deepened once I started using Angular.

Angular uses the same Model-View-Controller pattern that I had aimed to build my application around, and it was invaluable in developing my understanding of how to implement it. However, I adopted Angular at a relatively late point in the project, which limited the extent to which I could put this knowledge into practice in the back end of Codelight. As with my late adoption of MVC on the server, delaying the introduction of Angular resulted in some heavy refactoring: Angular’s ‘Model’ requires that JavaScript have access to data such as items in a gallery or comments on a page in order to manipulate them, whereas I had focused on serving data inside the markup where possible to enable indexing by search engines. If I were to start the project from scratch, or begin another using Angular, I’d make sure to have a more exposed API so that Angular could gather data in much the same format as it is available to Jade, enabling the two to complement one another rather than having Angular ‘bolted on’ over the rest of the application.

Despite the problems I encountered bringing in technologies and architectural patterns late in development, I feel this was the best way for me personally to learn them and improve. Although the software I used in Codelight was built on JavaScript, HTML and CSS, I had never used any of the tools themselves before, except for one or two small experiments with Node, and each came with a distinct learning curve. Angular in particular, which uses two-way data binding to automatically sync HTML elements and their values with variables in JavaScript and so avoid manipulating the DOM directly, took a lot of trial and error to get to grips with.
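For instance, binding an input to a scope variable in Angular 1.x looks something like this (a minimal sketch; the module and controller names are hypothetical, not Codelight’s actual code):

```js
// With <input ng-model="title"> and {{title}} in the template, the input,
// the scope variable and the rendered text stay in sync automatically;
// no manual DOM manipulation is needed.
angular.module('demo', [])
  .controller('SnippetCtrl', function ($scope) {
    $scope.title = 'My first snippet';
  });
```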


Designing the interface was an interesting experience. Graphical design and creating beautiful colour palettes is something that I’ve never had a strong affinity for. So rather than attempting to create an ‘artistically’ beautiful interface, I decided to focus on the more technical elements of the graphical interface and their interactions. During early development, I considered using a framework such as Bootstrap to handle some of the heavy lifting for the interface. Eventually I decided against this, partially because such comprehensive frameworks inevitably carry performance overheads which, although small, I couldn’t justify given how few of their elements I actually expected to use, but also because I felt using pre-packaged UI elements would detract from the identity of the application. I spent some time playing with margins and padding, ensuring elements were aligned, fitted together well and didn’t become too cluttered.

Overall, I feel my project was a success, despite the fact that I pushed myself quite far out of my comfort zone by using the MEAN stack (MongoDB, Express, Angular, Node) and caused myself a lot of setbacks along the way. During my early planning, I had envisioned a much broader range of functionality, which it would potentially have been possible to implement had I focused on simply firing out features. However, by focusing on a small set of core features, implementing them with best practices and testing them thoroughly with users, I feel I’ve improved my skills across the board far more than if I had attempted to poorly build the application to the scale I’d first envisioned. I’ve also gained a much better idea of the development time required for more complex applications, which I’m glad to have before entering the professional world.


Application Structure

11 April 2015, 8:11 am

With the addition of galleries and the increased complexity of the database structure, the ‘whatever works’ style of development I’d used for my initial proof of concept prototype was beginning to create large problems. The application was built around a central idehandler.js file, which contained the logic for the editor and asynchronous editing and handled snippet data once it was loaded from the database; at the time, this comprised almost the entire functionality of the application. In order to clean up the maze of spaghetti code I’d created by letting my “let’s just get a prototype working” mindset persist for too long, I decided to adopt the MVC pattern.

The Model-View-Controller pattern involves three areas which are, unsurprisingly:
Model: Stores the state of the application.
View: Generates output for the user.
Controller: Handles the logic for the application and updates the model and view.

By using this structure, code can be kept as clean and intelligible as possible by splitting each feature into the data stored, the operations which handle the majority of that feature, and the interface displayed to the user. Having separate models, views and controllers for each feature (e.g. one for the gallery and one for the editor) further increases code clarity.

This choice stemmed from the fact that, with the exception of a few fatal blunders, I’d already been building vaguely to this pattern: I had Jade handling my views, idehandler.js as my one and only controller (but also partially performing the job of the model) and a database.js module trying to act as the model but not actually holding any of the application data itself after loading it from the database.

The main structure of the application comprises:

app.js
This is the main file used to create an instance of the server module and start the application. It also loads the model, controller and router modules. This is handled at the highest level of the application so that the modules can be passed throughout the application and accessed through a variable rather than being loaded multiple times later during execution. This means that wherever a module is accessed in the application it will have the same state, rather than, for example, creating separate instances of the snippet model for the IDE and the gallery page which have no knowledge of one another. It also means that if the directory structure of the application changes, there is only one place where the file paths have to be modified.
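A skeletal app.js along these lines might look like the following (a sketch only; the module names and paths are illustrative, not the actual source):

```js
// app.js: load every module exactly once, at the top level
var Server = require('./server');
var router = require('./router');
var models = {
  snippet: require('./models/snippet'),
  gallery: require('./models/gallery')
};
var controllers = {
  ide: require('./controllers/ide'),
  gallery: require('./controllers/gallery')
};

// pass the single instances down by reference, so every part of the
// application shares the same module state
var server = new Server(router, models, controllers);
server.start(3000);
```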

server.js
This module initializes and stores instances of the HTTP server, websocket server and database connection, initializes the controllers and passes them references to access one another.

router.js
The router handles incoming connections, reads the URL, directs requests to their appropriate controllers and then renders the Jade templates and sends them to the client.
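As a rough sketch of the idea (the URL patterns and controller names are assumptions, and res.render assumes Express is providing the rendering):

```js
// router.js: map incoming URLs to controllers, then render a Jade view
module.exports = function (controllers) {
  return function (req, res) {
    if (req.url === '/gallery') {
      controllers.gallery.list(function (err, snippets) {
        res.render('gallery', { snippets: snippets });
      });
    } else if (req.url.indexOf('/snippet/') === 0) {
      var id = req.url.slice('/snippet/'.length);
      controllers.ide.load(id, function (err, snippet) {
        res.render('ide', { snippet: snippet });
      });
    } else {
      res.statusCode = 404;
      res.end('Not found');
    }
  };
};
```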

Controllers
Each primary feature will have a controller to handle the main logic. The controller will retrieve the data from the model modules and process it as required.

Models
Models retrieve data from MongoDB, handle data-related tasks such as adding and removing entries, and store snippets which are currently active in the IDE in order to reduce requests to the database.
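A sketch of what a snippet model could look like under this scheme (the collection and function names are assumptions):

```js
// models/snippet.js: MongoDB access plus an in-memory cache of
// snippets that are currently open in the IDE
var active = {}; // snippets being edited right now

module.exports = function (db) {
  return {
    get: function (id, callback) {
      if (active[id]) return callback(null, active[id]); // cache hit
      db.collection('snippets').findOne({ _id: id }, function (err, doc) {
        if (!err && doc) active[id] = doc; // cache for subsequent edits
        callback(err, doc);
      });
    },
    release: function (id) {
      delete active[id]; // last editor left; drop from the cache
    }
  };
};
```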

By using this pattern, I hope to increase the scalability of my project and reduce the amount of time required to add new features later on.

Posted in: Honours Project

Preprocessors and task automation

22 March 2015, 12:26 pm

CSS preprocessors compile an extended language into pure CSS which can be parsed by a browser. Preprocessors provide extended functionality to CSS to make code cleaner, more reusable and more scalable. By utilizing features such as variables and mixins, CSS preprocessors allow values which would otherwise be repeated throughout the source, such as font styling or a common colour in your design, to be defined once and reused.

Preprocessing CSS during development also means that stylesheets, which in a complex application would quickly become large and unwieldy, can be broken down into smaller files containing only the CSS for certain aspects of the application.

So what? If your only concern is styling in CSS, splitting things up into multiple files may seem unnecessary and messy. However, if you’re juggling styling, markup, database management and both front-end and Node JavaScript, it’s much less likely that you’ll be intimately familiar with the position of certain elements within one large CSS file. Having multiple files clearly named by view or by page means that you can go directly to the styles you’re looking for.

It’s also arguable that you can split CSS into multiple files without bothering with preprocessors, by using @import or simply embedding multiple files in your markup. But loading multiple files from the user’s browser incurs additional HTTP requests which impact loading time, and ‘@import’ing stylesheets from within stylesheets compounds this problem, as the first stylesheet must be fully loaded before the additional files even begin to load.

Using a preprocessor means that there needs to be software on the server to handle the CSS before it is distributed to the client. The decision of which preprocessor to use was largely a tossup between SASS and LESS. Ultimately SASS won out, as its SCSS syntax is a superset of traditional CSS, which I’m more comfortable with than a whitespace-based indented syntax. SASS can be installed via npm and used by the project server to compile the CSS at runtime, which would enable the CSS to be individually tweaked before being sent to the client. However, as this type of functionality isn’t necessary for my needs and would theoretically carry a performance overhead, I’m going to use the Grunt task runner to compile the CSS during development.

Grunt can be configured to monitor files for changes and execute commands when the files are updated.

During development, I’m also going to use Grunt to monitor changes to the server source files and automatically restart the server when the source is modified.
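A minimal Gruntfile sketch for this setup might look like the following (assuming the grunt-contrib-sass and grunt-contrib-watch plugins; the file paths are illustrative):

```js
module.exports = function (grunt) {
  grunt.initConfig({
    sass: {
      dev: {
        // compile the master stylesheet, which @imports the partials
        files: { 'public/css/main.css': 'scss/main.scss' }
      }
    },
    watch: {
      styles: {
        files: ['scss/**/*.scss'],
        tasks: ['sass'] // recompile whenever any partial changes
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.registerTask('default', ['sass', 'watch']);
};
```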

Posted in: Honours Project

SVG icons and grunt

20 March 2015, 11:01 pm

When it comes to interface elements, modern browsers rely far less on images than they used to. CSS is capable of replicating the button styles and basic backgrounds which once had to be served as images. And while CSS is capable of rendering icons, doing so requires in-depth knowledge of CSS and some time to put together. Enter the SVG.

SVGs, or Scalable Vector Graphics, are vector graphics stored in an XML format which can be read by the browser. SVGs have a very small file size, especially when minified and gzipped*. By using SVGs over images, we ensure that icons can be scaled almost limitlessly, as the browser has access to the paths which comprise the icon. Additionally, by applying a CSS class to an SVG element, the colour can be modified without the need for multiple images or a spritesheet of colourized versions of the icon.

Having said that, spritesheets are still a good idea. As the application becomes more complex, the number of icons will steadily increase, each requiring an HTTP request when a new user visits the application, increasing load time and reducing performance. This is another task which can be automated with Grunt and the grunt-svgstore package, which concatenates multiple SVG files into a single file, assigning each SVG declaration an ID based on its original file name which can then be used to reference it from the HTML.
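A possible grunt-svgstore configuration looks something like this (a sketch; the file paths are illustrative):

```js
// Gruntfile.js: combine individual icons into one spritesheet file
module.exports = function (grunt) {
  grunt.initConfig({
    svgstore: {
      options: {
        prefix: 'icon-' // each symbol's ID becomes icon-<file name>
      },
      default: {
        files: { 'public/img/icons.svg': ['src/icons/*.svg'] }
      }
    }
  });
  grunt.loadNpmTasks('grunt-svgstore');
  grunt.registerTask('icons', ['svgstore']);
};
```

An icon from the combined file can then be referenced from the markup with something like `<svg><use xlink:href="/img/icons.svg#icon-save"></use></svg>`.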

*Minification is the process of removing unnecessary whitespace and redundant special characters from a text file to make it smaller; gzip compresses responses before sending them to the browser, decreasing their size by up to 90%.

Posted in: Honours Project

Storage with MongoDB and Git

15 March 2015, 3:44 pm

Working between my desktop at home and my laptop in the studios, working from a flash drive or constantly uploading work quickly stopped being viable as the project became more complex. To alleviate this, I decided to use Git version control and GitHub to host my project during development. Git provides a quick method of syncing modifications between multiple machines or developers. Each modification to a file is registered by Git, is reversible, and can be pushed to a repository on the GitHub server. When working from a different location, the complete revision history (the list of modifications to each file) can be pulled from the server to the new machine and work can continue.

There is an active community around Node which provides modules to extend its functionality, available via the node package manager (npm). One example is Express, which I mentioned in my last post. These are open source projects, also hosted on GitHub, which are regularly maintained and updated. There is some debate over whether or not node modules should be stored in your repository.

The main reason to commit third-party modules to your repository is longevity: either the application is expected to exist over a timeframe which may outlast the node package manager itself, or you want to ensure it will not break if a required module version is removed from the registry.

At this stage, I’m going to omit node modules from my repository and rely on npm to handle my dependencies. Between my two computers and the deployment server, my application has to run on three different operating systems (OS X Yosemite, Ubuntu Server 14.04 and Windows 8), and a number of node modules are built for a specific platform when installed, so storing my dependencies wouldn’t serve any purpose. Certain files or folders can be omitted from Git by simply including an entry in Git’s ‘.gitignore’ file.

Initially, I had hoped to store my testing MongoDB database within the source directory and use Git to sync it between my workspaces to keep stored test data consistent. Although this was intended to save time, the database files are locked while the MongoDB server is active and cannot be committed by Git. To overcome this, MongoDB must be shut down before each commit, and although that is a relatively quick process, it adds up and makes Git (which is meant to be a productivity tool) much less productive. To avoid having to stop the server for each commit, the database folder will be added to .gitignore along with node_modules.
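For illustration, the relevant .gitignore entries might look like this (assuming the database files live in a data/ directory):

```
# platform-specific dependencies, restored with `npm install`
node_modules/
# local MongoDB data files, locked while the server runs
data/
```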

Posted in: Honours Project

Back-end and storage

1 March 2015, 6:42 am

Collaboration will be an important feature of my application. To enable users to work together as effectively as possible, communication between users must be as fluid and uninterrupted as possible. Traditional HTTP requests do not lend themselves well to this, as they are handled by providing a response and then closing the connection. Each page loaded from, or chat message sent to, the server has to be accompanied by header information which varies from ~200 bytes to over 2KB, according to Google’s SPDY research whitepaper. WebSockets send headers only when establishing a connection and then keep the connection alive, allowing subsequent messages to be transferred with minimal overhead.
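As a rough sketch of how this looks in practice (using the ‘ws’ package for Node; the broadcast logic is illustrative, not my actual implementation):

```js
var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8080 });

wss.on('connection', function (socket) {
  socket.on('message', function (message) {
    // relay each edit/chat message to every other connected client;
    // no HTTP headers are re-sent once the connection is established
    wss.clients.forEach(function (client) {
      if (client !== socket && client.readyState === 1 /* OPEN */) {
        client.send(message);
      }
    });
  });
});
```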

Node.js is “a platform built on Chrome’s JavaScript runtime for easily building fast, scalable network applications”. Node can be used to deploy JavaScript as a standalone application to perform routine tasks, act as an HTTP server and handle websocket connections.

Express is a framework which provides a set of extended features for Node, such as routing to handle incoming requests to the server and direct them based on URL or POST data.
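For example, a minimal route might look like this (the /snippet path and view name are hypothetical):

```js
var express = require('express');
var app = express();
app.set('view engine', 'jade'); // templates live in ./views by default

// direct requests based on the URL; :id is available as a parameter
app.get('/snippet/:id', function (req, res) {
  res.render('snippet', { id: req.params.id });
});

app.listen(3000);
```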

MongoDB is a NoSQL database which stores records as “documents” rather than the table entries used in SQL storage. This means that more complex objects can be stored: data such as comments can be kept as an array of objects attached to their subject rather than requiring an entirely separate table.
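For instance, a snippet document with embedded comments might look something like this (the field names are illustrative):

```js
{
  title: 'CSS loading spinner',
  author: 'anna',
  code: '.spinner { animation: spin 1s linear infinite; }',
  comments: [
    { author: 'ben', text: 'Nice! Try easing the rotation.' },
    { author: 'cat', text: 'Works in Firefox too.' }
  ]
}
```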

Jade is an HTML templating engine which provides a cleaner syntax for writing HTML and allows variables to be embedded directly, without having to mix languages in the way that PHP tags are embedded inside HTML.

These technologies are known to work well together and comprise most of the MEAN (MongoDB, Express, Angular, Node) stack. While I’m still unsure how I’m going to handle the finer points of the front-end interface, I’m leaning towards Angular, as its two-way data binding sounds interesting.

Posted in: Honours Project

Code editing in-browser

25 February 2015, 2:24 pm

My first concern when creating a service for people to share code is finding the best method of allowing people to view and edit code from within the browser. In order to display code in a readable fashion, I’ll need to ensure that the application is capable of displaying indentation and syntax highlighting.


The indentation of lines or blocks of code creates visually separate blocks which help to give a clear view of the structure of the program while syntax highlighting creates easily recognizable markers which allow the reader to easily identify certain elements within a block of code, for example a string of text or a variable name.

While both are relatively simple to display as static content, retaining syntax highlighting while the code is being edited requires much more comprehensive text editing functionality than browsers natively support. Rendering syntax highlighted code – or multi-coloured text in general – in a web browser requires the text to be laced with HTML to dictate colours, and this markup must be generated and injected into the string before rendering without interfering with user input.

The most realistic approach to this requirement is to look to pre-built open source software. Ace (http://ace.c9.io/) is a tried and tested solution to this problem, used by a number of reputable and heavily used services including GitHub and Wikipedia (according to the Ace website).

Ace provides a well documented API and the ability to easily hook into events such as adding and removing lines or selecting text allowing it to be easily extended to include features specific to my application.

I was able to create the most basic prototype for my application in seconds by loading Ace into a panel in one half of the screen and an iframe in the other half, with the iframe displaying the contents of a data URL compiled from the content of the Ace panel whenever a change is made by a user.
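The prototype boiled down to something like this (a sketch; the element IDs are illustrative, and it assumes ace.js is already loaded on the page):

```js
var editor = ace.edit('editor');              // left-hand Ace panel
editor.getSession().setMode('ace/mode/html');

var preview = document.getElementById('preview'); // right-hand iframe

editor.getSession().on('change', function () {
  // rebuild the preview from the editor contents on every edit
  preview.src = 'data:text/html;charset=utf-8,' +
    encodeURIComponent(editor.getValue());
});
```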

Posted in: Honours Project

Prototyping and user research

17 February 2015, 11:14 am

Paper prototyping
As I started thinking about how to generate user feedback and how best to explain the experience I was planning, I began putting together storyboards and paper prototypes. During a quick session doodling out how the user would move through the experience, I ran into roadblocks whichever way I tried to do things. No matter how I tried, I found myself constantly using the word “imagine” when thinking of how to present these to the user.
I’m not an overly artistic person, and attempting to create a visualization of an experience as complex as learning was beyond me. So, for better or worse, I scrapped paper prototyping for the time being.

Getting hands on
What I can do is knock together usable prototypes relatively quickly, so I planned to build a functional prototype covering the absolute core functions: writing code, viewing other people’s code and giving feedback on it. This approach would let users get hands-on experience and generate much more valuable feedback despite the initial development time overhead, so I began to think about what sorts of user feedback sessions were possible with a functional prototype.

Day to day usage
As a couple of other people are considering JavaScript/web-based implementations of their projects and we occasionally discuss problems, having a working prototype provides an easy way to share knowledge and gather feedback at the same time.

Pros
Takes very little time
Provides occasional but constant feedback
Feedback from the primary user group
Ability to observe how people are learning to code first-hand
Mutual benefit – so people are willing to contribute time

Cons
Feedback from only a small number of individuals
Small amounts of sample data, however regular

Test groups
Gather a number of people who are familiar with front-end design and development, have them explore the application and put together bits and pieces and see how it fits their day-to-day workflow and what sorts of things would make it a more valuable tool.

Pros
Feedback from people who know their stuff
Large amount of feedback to work with

Cons
Feedback from people who know their stuff and will likely have very different perceptions from people who are only just starting to learn
Little benefit to the testers – may be difficult to generate numbers

Popup tutorials
Provide occasional front-end/JavaScript tutorial sessions, open to all and free, providing the application is used. This is far and away the best method I can think of to conduct user research, but it has the drawback of being somewhat reliant on my own charisma and public speaking abilities.

Pros
Mutual benefit – potentially people will be willing to attend as long as there are enough people interested in the subject matter
Continual feedback from (hopefully) the same group of people

Cons
Would require additional time and planning


Posted in: Honours Project

Honours Project: Hang on

17 November 2014, 3:22 am

Following my initial ideas, I have been doing more research into each of my plans. The more I think about it, the more I’m beginning to shy away from my initial primary concept of a streaming video service. The challenges in that project are primarily technical, and while its overall social implications are interesting, I’m beginning to think it’s not the direction I want to take my research this year.

My reasons for changing aren’t all negatives on the creative streaming side of things. The more thought I put into the ways people share and learn from one another, the more interesting and multifaceted I find the idea of collaborative coding. I’ve always found the best learning tool for coding to be other people’s code itself, even when undocumented; in some cases, especially when it’s undocumented. This got me thinking about the services currently available for viewing other people’s code. There is, of course, a plethora of services hosting freely available code to view, use and modify. The thing about these services (excluding, for now, tutorial posts and articles) is that they suffer from a lack of searchability. CodePen and JSFiddle are exemplary websites which make sharing small snippets of code incredibly easy, but they are designed as a means to that end, acting as ad hoc companions to forums, social networks and Q&A sites such as Stack Overflow rather than standing alone.

Posted in: Honours Project