A Curious Animal

Born to be curious, born to be animal!

Specification pattern for NodeJS

21 September, 2015
- 4 min read

Although the specification pattern is mainly used in DDD to check business rules, I think the idea of combining rules offers great flexibility in any application architecture: it is suitable for any kind of validation, simplifying and improving reusability and making code clearer. Because of this, a few days ago I started working on an implementation of the specification pattern for NodeJS. The code is freely available at the GitHub repository and is also installable via npm.

The specification pattern

There are tons of good documents and tutorials about the pattern, so I don't want to extend too much here. The best source of information, IMO, is Eric Evans' big blue book, Domain-Driven Design.

The specification pattern is powerful enough to be used for validations, queries and the creation of objects that satisfy some criteria. Take into account that this post is only related to the first option: the validation of objects that satisfy some criteria.

A specification is a piece of code that checks whether a business rule is satisfied or not. For example, given a bug tracking system, we can create two specifications to check if a software project: has few issues, if the number of issues reported last month is less than 10; and is an updated project, if the date of the last solved issue is not more than a week old. The great thing about specifications is that we can easily combine them to create complex rules while reusing code. For example, we can create the specification quality project, which means a project has few issues and is an updated project.

Wikipedia has a nice UML class diagram of the specification pattern. There we can see a specification is any class that implements the isSatisfiedBy method and has the and, or and not methods to chain specifications.

How to use the NodeJS implementation?

I have created two implementations of the pattern: an asynchronous and a synchronous version.
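As a sketch of the idea, here is a minimal synchronous implementation. The class and method names follow the classic UML diagram (isSatisfiedBy, and, or, not) and the bug-tracker field names are invented for illustration; this is not necessarily the exact API of the npm package.

```javascript
// Base class: concrete specifications implement isSatisfiedBy; and/or/not
// return composite specifications so rules can be chained.
class Specification {
  isSatisfiedBy(candidate) {
    throw new Error('isSatisfiedBy must be implemented');
  }
  and(other) { return new AndSpecification(this, other); }
  or(other) { return new OrSpecification(this, other); }
  not() { return new NotSpecification(this); }
}

class AndSpecification extends Specification {
  constructor(left, right) { super(); this.left = left; this.right = right; }
  isSatisfiedBy(c) { return this.left.isSatisfiedBy(c) && this.right.isSatisfiedBy(c); }
}

class OrSpecification extends Specification {
  constructor(left, right) { super(); this.left = left; this.right = right; }
  isSatisfiedBy(c) { return this.left.isSatisfiedBy(c) || this.right.isSatisfiedBy(c); }
}

class NotSpecification extends Specification {
  constructor(spec) { super(); this.spec = spec; }
  isSatisfiedBy(c) { return !this.spec.isSatisfiedBy(c); }
}

// The two concrete rules from the bug-tracker example (field names invented).
class HasFewIssues extends Specification {
  isSatisfiedBy(project) { return project.issuesLastMonth < 10; }
}
class IsUpdatedProject extends Specification {
  isSatisfiedBy(project) {
    const WEEK = 7 * 24 * 60 * 60 * 1000;
    return Date.now() - project.lastSolvedIssueDate < WEEK;
  }
}

// "Quality project" = few issues AND updated project.
const qualityProject = new HasFewIssues().and(new IsUpdatedProject());
console.log(qualityProject.isSatisfiedBy({
  issuesLastMonth: 3,
  lastSolvedIssueDate: Date.now() - 1000
})); // true
```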
The synchronous version is fine for in-memory validations, for example when you do not need to query a database. The asynchronous version, on the other hand, is suitable for cases in which the validation depends on an asynchronous source, like a file, a query to an API, etc.

The first step is to include the required version (synchronous or asynchronous). Then, for each business rule (or validation) you need to check, a specification must be created; for example, a specification that checks if a number is greater than the one indicated when the specification was created. The base class offers the and, or and not methods we can use to chain specifications and build complex ones.

The asynchronous version is suitable if you need to check against an asynchronous source, like a database, files, etc. The only difference is the way the check method is implemented and invoked, which must use a callback. Chaining specifications works in the same way as in the synchronous version.

Conclusions

The post presents a dual implementation of the specification pattern for NodeJS, synchronous and asynchronous. There is no reason not to use design patterns in a multi-paradigm language like JavaScript. The specification pattern can help when working with validations, simplifying reusability and allowing validations to be chained through specifications.
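A minimal sketch of the asynchronous flavour (again, class and field names are illustrative, not necessarily the package's exact API): the only difference is that isSatisfiedBy reports its result through a Node-style callback instead of returning a boolean.

```javascript
// Async base class: isSatisfiedBy(candidate, callback) instead of a return value.
class AsyncSpecification {
  isSatisfiedBy(candidate, callback) {
    throw new Error('isSatisfiedBy must be implemented');
  }
  and(other) { return new AndAsyncSpecification(this, other); }
}

class AndAsyncSpecification extends AsyncSpecification {
  constructor(left, right) { super(); this.left = left; this.right = right; }
  isSatisfiedBy(candidate, callback) {
    this.left.isSatisfiedBy(candidate, (err, ok) => {
      if (err || !ok) return callback(err, false);
      this.right.isSatisfiedBy(candidate, callback);
    });
  }
}

// Rules that pretend to hit an asynchronous source; setImmediate stands in
// for a database query or an HTTP request.
class HasFewIssuesAsync extends AsyncSpecification {
  isSatisfiedBy(project, callback) {
    setImmediate(() => callback(null, project.issuesLastMonth < 10));
  }
}

class IsUpdatedProjectAsync extends AsyncSpecification {
  isSatisfiedBy(project, callback) {
    const WEEK = 7 * 24 * 60 * 60 * 1000;
    setImmediate(() => callback(null, Date.now() - project.lastSolvedIssueDate < WEEK));
  }
}

// Chaining works as in the synchronous version, only the result arrives async.
const qualityProject = new HasFewIssuesAsync().and(new IsUpdatedProjectAsync());
qualityProject.isSatisfiedBy(
  { issuesLastMonth: 3, lastSolvedIssueDate: Date.now() - 1000 },
  (err, ok) => console.log(ok) // true
);
```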

I'm working on ClydeIO

14 September, 2015
- 5 min read

ClydeIO is an open source API gateway and management layer based on NodeJS. It is extremely flexible, configurable and simple to extend. It is designed to simplify the development of new APIs (or simply to improve existing ones) by adding authentication, logging, rate limiting, etc. Note the project is currently a proof of concept (I'm working to apply it in a real system) and most of the developed filters are basic implementations to demonstrate its usefulness.

What is an API gateway and why do I need one?

The most important part when designing and implementing an API is to model your business correctly and offer a set of well-defined operations accordingly. Unfortunately, that is only a small part of the job, and it is not enough to ensure success. In a real-world system you need to secure your API, store logs, apply rate limits, etc. The task of publishing an API becomes much more complex than understanding your business: you have entered the world of security, monitoring and... the unknown!!!

An API gateway is a single point of entry responsible for applying actions (like security or logging) before redirecting the request to your real do-the-job API. We can see the gateway as a kind (or mix) of firewall and proxy, and it is really useful when implementing microservices. Thanks to ClydeIO you can spend your effort implementing your business API, leaving the rest to the gateway.

The glory of ClydeIO is its simplicity and how easy it is to extend with new filters, for example to:

- Secure an existing API
- Log access to any resource
- Apply rate limits
- Filter requests (allow/deny) depending on the query parameters
- Cache data
- or whatever you need

Because it is based on Node, we can use the myriad of awesome Node packages out there and integrate them within your filters.

Why a new system instead of contributing to an existing one?

I was looking at similar projects before starting ClydeIO.
There are plenty of services that provide the same functionality (and much more), as well as many projects with a great maturity level, but none satisfied my needs. Services imply a usage cost that can sometimes be hard to calculate and, in some cases, require you to adapt your systems (your business API) to accommodate the service requirements. Other software projects mean you must be comfortable with the technology they are implemented with, mainly the programming language and the database used to store configuration and information.

One thing I saw in common in most of the software projects is that they do what they do: they are prepared to do the most common things (rate limiting, security, logging, etc.) and do them well, but they do not let you easily extend the gateway with the new requirements users may have. For example, it is hard to know how to extend the gateway to send you an email when there have been more than ten invalid accesses. In addition, I found some of them really complex to configure, based on monster XML configuration files.

One last comment on why I created ClydeIO: to take advantage of Node modules. There exist other API gateways implemented on top of the NGINX server, or in Lua, Go or Python, but none implemented with Node. To be honest, I must mention here the StrongLoop LoopBack API Gateway product.

Current status

ClydeIO is currently a proof of concept and I have implemented a bunch of filters to test its capabilities. Currently all configuration is provided via a JSON file. That's nice and simple, but not very secure when working with authentication filters that need to specify users and passwords, or in a real scenario that requires managing hundreds of users. Because of this, I'm currently working hard on the configuration module, responsible for managing the whole configuration and designed to be easy to implement for different backends: memory, Redis, MongoDB, PostgreSQL, ...
I have a great feeling about ClydeIO's possibilities, but to be honest it is currently a personal side project I work on in my scarce free time. I have no contributors nor sponsors. So, if you arrive at this page and are interested in the project, feel free to contact me and start helping with your time or money :)

Documentation

I have created the ClydeIO GitHub organization to host all the projects related to ClydeIO. We can differentiate between the core project, also called clydeio, and the rest of the projects, which are Clyde's filters. The current core project documentation can be found at the project's wiki: https://github.com/clydeio/clydeio/wiki. It will probably change soon, once the configuration module is finished, but the concepts remain the same.

Contributions

As I said, for the moment this is a personal project I develop in my free time. So don't hesitate to contact me for any kind of support and help.

How to read from a writable stream (http.ServerResponse) in Node

31 August, 2015
- 3 min read

I'm working on a personal side project (when I have free time to spend on it) where a bunch of middlewares are chained to do something awesome (take a look at ClydeIO; note it is currently a proof of concept). Remember a middleware is nothing more than a function that receives three parameters: the request, the response and the next function. The request parameter is an instance of http.IncomingMessage, a readable stream, while the response parameter is an instance of http.ServerResponse, a writable stream (something similar to Java servlets). Usually a middleware reads the request, optionally attaches more data or modifies it, maybe writes some content to the response and continues the execution in the next middleware by invoking the next function. If any of the middlewares writes content and invokes the end method, the response is sent to the client and the middleware chain is aborted. Note, I'm working with Connect and not Express, but the concepts are the same. Express uses the send method to write content, which is based on the write and end core Node methods. Express also extends the request and response objects with its own methods.

Now, suppose that in a middleware you need to get the current content (the response body) other middlewares have written. The response is an output stream and it is designed to offer write-oriented operations: write content, write headers, etc., but no read operations. So how can we read the content written to a writable stream? Do not confuse the response reference we have in a middleware (or simply in a request listener), which is a writable stream, with the response object we obtain when using http.request(). With http.request() we obtain an instance of http.ClientRequest, and when we listen on it for the 'response' event we get an http.IncomingMessage, a readable stream. That is, the response we obtain there is a readable stream where we can read data sent from the server. In a middleware we are the server, and we are responsible for writing data to an output/writable stream.

The solution I found is to override the write method. The write method is defined by the stream.Writable interface, and every writable stream class must implement it.
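For illustration, the middleware shape and the chaining idea can be sketched like this (the logger and the toy runner are my own examples, not Connect's internals):

```javascript
// A Connect-style middleware: a plain function receiving the request, the
// response and the next() continuation.
function logger(req, res, next) {
  console.log(req.method, req.url); // inspect the readable side (the request)
  next();                           // pass control to the next middleware
}

// Middlewares are chained by calling each one in turn until one of them
// ends the response. A trivial runner that shows the idea:
function runChain(middlewares, req, res) {
  function dispatch(i) {
    if (i < middlewares.length) middlewares[i](req, res, () => dispatch(i + 1));
  }
  dispatch(0);
}
```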
So, the idea is to override the write method in a way that lets us store the data being written and then invoke the parent method, so it keeps doing what it usually does. This way, a middleware can read all the chunks of data any other middleware writes to the response while the normal execution continues through the original version of the write function. Note, if you need to access the response headers the solution is similar, but in this case you need to override the header-related methods, such as writeHead and setHeader.
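A sketch of the technique (the middleware name and the ad-hoc res.body property are my own choices):

```javascript
// Middleware that monkey-patches res.write and res.end: each chunk is stored
// before delegating to the original methods, so later code can read the body.
function captureBody(req, res, next) {
  const chunks = [];
  const originalWrite = res.write;
  const originalEnd = res.end;

  res.write = function (chunk, ...args) {
    if (chunk) chunks.push(Buffer.from(chunk));
    return originalWrite.apply(this, [chunk, ...args]); // keep normal behaviour
  };

  res.end = function (chunk, ...args) {
    if (chunk) chunks.push(Buffer.from(chunk));
    // Expose the accumulated body; 'body' is an ad-hoc property name.
    this.body = Buffer.concat(chunks).toString('utf8');
    return originalEnd.apply(this, [chunk, ...args]);
  };

  next();
}
```

Mounted before the other middlewares, it sees everything they write because they all share the same response object.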

Using random unsplash images on your website

17 August, 2015
- 2 min read

Recently I updated my web page, and one of the nice features I included is that a random image is placed in the header of each page. These images are obtained from the great unsplash, a free (do whatever you want) high-resolution photos project. As its slogan says, unsplash contains tons of really good images, both for their resolution and for the photography itself. To embed the unsplash images in my website I used the unsplash.it service, whose slogan is "Beautiful placeholders using images from unsplash". It was created by David Marby and Nijiko Yonskai, and the code is publicly available at the GitHub unsplash-it repository (if you are interested, it is a NodeJS based project). But what exactly does unsplash.it do? Basically, it returns images from the unsplash project with some steroids:

- Retrieve an image indicating the desired width and height: https://unsplash.it/500/400
- Use a square image: https://unsplash.it/500
- Get a random image: https://unsplash.it/500/400?random, or maybe grey styled: https://unsplash.it/g/500/400?random
- Obtain the list of available images: https://unsplash.it/list
- Retrieve a given image: https://unsplash.it/500?image=123, maybe a bit blurred: https://unsplash.it/500?image=123&blur
- And finally, crop if desired: https://unsplash.it/500?image=123&gravity=east

Yeah!!! unsplash.it is a nice service, easy and free to use.
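The URL variants above can be assembled with a tiny helper. This function is my own convenience sketch, not part of unsplash.it:

```javascript
// Build an unsplash.it URL from a width, an optional height and options.
function unsplashUrl(width, height, opts = {}) {
  let path = opts.grey ? '/g/' + width : '/' + width;
  if (height && height !== width) path += '/' + height; // omit for squares
  const params = [];
  if (opts.random) params.push('random');
  if (opts.image !== undefined) params.push('image=' + opts.image);
  if (opts.blur) params.push('blur');
  if (opts.gravity) params.push('gravity=' + opts.gravity);
  return 'https://unsplash.it' + path + (params.length ? '?' + params.join('&') : '');
}

console.log(unsplashUrl(500, 400, { random: true }));
// https://unsplash.it/500/400?random
```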

Migrating from WordPress to Jekyll

11 August, 2015
- 3 min read

WordPress is awesome, an incredible project with thousands of plugins that needs no presentation. What started as a blogging platform has become a powerful framework to build almost anything. All that is fine, but... recently I migrated this blog from WordPress to Jekyll.

The problem for me is that WordPress started as a quick solution to have a blog but increasingly became too big a resource to administer. I had installed plugins to manage spam, to limit login attempts (for security reasons), to share content on social networks, to highlight source code, to add the Google Analytics code, etc. I have a tech blog: I write about programming, libraries and how to do things, accompanied by sample code. The WordPress WYSIWYG editor is great but not agile for writing code. On the other side, the text view of the content is poor and gets mangled by the visual view each time you switch between tabs.

Jekyll is much simpler than WordPress. It is much less powerful in many aspects, but it offers what I need in a much simpler way. Jekyll is a static site generator. Unlike WordPress, written in PHP, where each request implies querying data, applying some processing and finally returning the resultant page, Jekyll has a completely different philosophy. With Jekyll you write content in markdown syntax, applying some rules, mainly a front matter section on pages and posts. Once you are happy with your content, Jekyll compiles it and builds a static version of the whole site, ready to be hosted. For those with a GitHub account, remember that if you put all the content in a repository there is no need to compile, because GitHub does it for you each time you push a commit. Yes, you write directly in markdown syntax in any text editor, there is no visual editor like in WordPress, but that is exactly what I need: to have as much control as I can over what I write.
In fact, the writing process turns out to be quicker than using the WYSIWYG mode in WordPress, especially when I need to write code.

The migration process

To be honest, I must admit in my case it was a bit traumatic, partly because I'm a bit picky about my things. The migration steps were basically:

1. Export your WordPress content via the Tools > Export menu.
2. Use the jekyll-import tool to create a Jekyll site from the WordPress export file. The tool creates all the posts and pages but, unfortunately, attaches too much information to the front matter section that I neither like nor want. So I had to apply many modifications manually to each post and, in most cases, translate from HTML to markdown syntax (using the to-markdown translator tool).
3. Migrate all WordPress comments to the Disqus platform. For this, I needed to install the Disqus Comment System plugin and have a configured account. Once you configure the plugin, all the comments are migrated automatically. Here the step was a bit more cumbersome because the new blog is at a different URL, so I needed to export all the Disqus discussions, update all the URLs manually and import them back into the system.
4. Choose a Jekyll theme and adapt it a bit to my needs. I chose Kasper, a Ghost theme ported to Jekyll. I changed the theme to include some more header buttons (one for each page) and to add random images from the unsplash project using the nice unsplash.it service.

The result is a blog I feel really comfortable with :)

Working with different environments in Jekyll

04 August, 2015
- 3 min read

Jekyll is an open source static site generator. It allows writing content in markdown (also HTML) following some rules, like adding a front matter section on pages and posts. Later, Jekyll compiles all the content and generates a static version of each page and post. The main benefit of having a static site is speed. Contrary to other blog engines like WordPress, where each request requires the server to query data, process it and return the page, Jekyll does that work only once, when the static site is built. Jekyll is also well known for being used by GitHub. If you host your Jekyll posts in a GitHub repository you don't need to compile them, because GitHub does it for you each time you make a commit. If you are interested in this topic, take a look at the GitHub Pages basics.

A few days ago I migrated this blog from WordPress to Jekyll. Note its source code, with all the posts I wrote and will write, is publicly available at https://github.com/acanimal/acuriousanimal-blog.github.io. The method I follow to write is: I write in my local version of Jekyll. Once I finish an article, I run Jekyll locally to check everything looks fine. If all is fine, I commit the changes to the GitHub repository, which makes GitHub compile the posts and generate the same content I see locally.

Something I missed from the beginning was the possibility of having a different configuration depending on the environment. When working locally I want all links and images to point to my local computer, while when I upload content to GitHub I want links relative to the GitHub hosting. Fortunately, this is really easy to achieve using more than one configuration file. I have a main _config.yml where all the variables are set. In addition, I have a second configuration file I only use when working locally, which redefines the variables I want to change in that environment. Jekyll commands allow specifying more than one configuration file.
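As an illustration (the variable values and the local file name _config.dev.yml are my choices for this sketch, not necessarily the blog's actual ones), the main _config.yml defines the production values:

```yaml
# _config.yml — main configuration, used for the published (GitHub) site
url: http://acuriousanimal.com
```

while a local-only file redefines them for development:

```yaml
# _config.dev.yml — overrides applied only when building locally
url: http://localhost:4000
```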
We simply must remember that a variable defined in the first configuration file can be overridden by a value in a later file. With this in mind, I run Jekyll locally passing both configuration files, so the values specified in the main _config.yml are overridden by the values specified in the local configuration file.
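The invocation looks like this: Jekyll's --config option takes a comma-separated list of files, with later files overriding earlier ones (the local file name _config.dev.yml is my assumption):

```shell
# Local preview: values in _config.dev.yml override those in _config.yml
jekyll serve --config _config.yml,_config.dev.yml
```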