Tips & Tricks for a Successful Online Portfolio

Our friends at Toptal screen a lot of designers, so over time we have learned what goes into making a captivating and coherent portfolio. A designer’s portfolio introduces their skill set and strengths, and represents them to future employers, clients, and other designers. It shows not only past work but also future direction. There are several things to keep in mind when building a portfolio, so here is the Toptal guide of tips and common mistakes for portfolio design.

1. Content Comes First

The main use of the portfolio is to present your design work. Thus, the content should inform the layout and composition of the document. Consider what kind of work you have and how it might best be presented. A UX designer may require a series of animations to describe a set of actions, whereas a visual designer may prefer spreads of full images.

The portfolio design itself is an opportunity to display your experience and skills. However, excessive graphic flourishes shouldn’t impede the legibility of the content. Instead, consider how the backgrounds of your portfolio can augment or enhance your work: backgrounds in colors similar to those of the content will bring out the details of your projects, while lighter content will stand out against dark backgrounds. Legibility is critical, so ensure that your portfolio can be experienced in any medium and accounts for accessibility concerns such as color contrast and readability.

You should approach your portfolio in the same manner you would any project. What is the goal here? Present it in a way that makes sense to viewers who are not necessarily visually savvy. Edit out projects that may be unnecessary. Your portfolio should be a taster of what you can do, preparing the client for what to expect to see more of in the interview. The more efficiently you can communicate who you are as a designer, the better.

2. Consider Your Target Audience

A portfolio for a client should likely be different from a portfolio shown to a blog editor or an art director. Your professional portfolio should always cater to your target audience. Edit it accordingly. If your client needs branding, then focus on your branding work. If your client needs UX strategy, then make sure to showcase your process.

Even from client to client, or project to project, your portfolio will need tweaking. If you often float between several design disciplines, as many designers do, it would be useful to curate a print design portfolio separately from a UX or visual design portfolio.

3. Tell the Stories of Your Projects

As the design industry has evolved, so have our clients, and their appreciation for our expertise and what they hire us to do. Our process is often as interesting and important to share with them as the final deliverables. Try to tell the story of your product backwards, from the final end point through to the early stages of the design process. Share your sketches, your wireframes, your user journeys, user personas, and so on.

Showing your process allows the reader to understand how you think and work through problems. Consider this an additional opportunity to show that you have an efficient and scalable process.

4. Be Professional in Your Presentation

Attention to detail, in both textual and design content, is an important aspect of any visual presentation, so keep an eye on alignment, image compression, embedded fonts, and other elements, as you would in any project. The careful treatment of your portfolio should reflect how you will handle your client’s work.

With any presentation, your choice of typeface will impact the impression you give, so do research the meaning behind a font family, and when in doubt, ask your typography-savvy friends for advice.

5. Words Are As Important As Work

Any designer should be able to discuss their projects as avidly as they can design them, so your copywriting is essential. True, your work is the main draw of the portfolio; however, the text, and how you write about your work, can give viewers deeper insight into it.

Not everyone who sees your work comes from a creative or visual industry. Thus, the descriptive text that you provide for images is essential. At the earlier stages of a project, where UX is the main focus, you will often need to complement your process with clearly defined content, both visual diagrams and textual explanations.

Text can also be important for providing the context of a project. Often much of your work is done in the background, so why not present it somehow? What was the brief, and how did the project come about?

Avoid These Common Mistakes

The culture of portfolio networks like Behance or Dribbble has cultivated many bad habits and trends in portfolio design. A popular trend is the perspective view of a product on a device. However, these images often do little to effectively represent the project, and they hide details and content. Clients need to see what you have worked on before, with the most logical visualisation possible. Showcasing your products in a frontal view, with an “above the fold” approach, often makes more sense to the non-visual user. Usually, the best web pages and other digital content are presented with no scrolling required. Avoid sending your website portfolio as one long strip, as this is only appropriate for communicating with developers.

Ensure that you cover all portfolio formats. Today you are expected to have an online presence; however, some clients prefer that you send a classic A4 or US Letter-sized PDF. You need to have the content ready for any type of presentation.

Try to use a consistent presentation style and content throughout the projects in your portfolio. Differentiate each project with simple solutions like different coloured backgrounds, or textures, yet within the same language.

Source: Toptal

Build Ultra-Modern Web Apps with Angular Material

At the Google I/O Conference back in 2014, Google announced Material Design, their new design language. They have since converted many of their popular applications to adhere to this new spec in an effort to provide a consistent experience. Now they are trying to convince you to follow along as well.

Angular Material: Superheroic JavaScript Framework Meets Ultra-Modern Design

What is Material Design?

After a visit to the official Material Design spec, you will immediately get a feeling of ultra-modern minimalism. Basic shapes and flat colors are the theme here. Going through the documentation is quite an experience. I recommend taking a look for yourself, but I will summarize it here.


The purpose is to create a visual language that synthesizes classic principles of good design with the innovation and possibility of technology and science. Also to develop a single underlying system that allows for a unified experience across various platforms and device sizes.


Material Design is founded on three principles.

Material Is the Metaphor

Inspired by the study of paper and ink, the material lives in 3D space and is grounded in tactile reality. It gives the illusion of space by using realistic shadows. The paper material must abide by the laws of physics (e.g., two pieces of paper may not travel through each other), but may supersede the physical world (e.g., a paper may grow or shrink).

Bold, Graphic, Intentional

Deliberate color choices, edge-to-edge imagery, large-scale typography, and intentional white space create a bold and graphic interface that immerses the user in the experience. The Floating Action Button, or FAB, is a prime example of this principle. Have you noticed that little circle with the ‘plus’ symbol floating around in your Google Inbox app? Material Design makes it very apparent that this is an important button.

Motion Provides Meaning

Motion is meaningful and appropriate, serving to focus attention and maintain continuity. Feedback is subtle yet clear. Transitions are efficient yet coherent. The main point here is to animate only when it has a purpose and not to overdo it.

How does AngularJS fit into Material Design?

AngularJS, Google’s “Superheroic JavaScript MVW Framework”, addresses many of the challenges encountered in developing single-page applications (SPA). It provides the framework needed for creating modern web applications that connect to APIs and never need the page to be refreshed.

AngularJS: A New Approach

Angular is what HTML would have been, had it been designed for applications. HTML is a great declarative language for static documents, but it falls short when it comes to creating dynamic applications.

Creating dynamic applications with HTML has always been an exercise in tricking the browser into doing things it wasn’t meant to do. There are a couple of approaches to doing this.

  1. Library – a collection of functions. (jQuery)
  2. Framework – code dynamically fills in static elements when needed. (Durandal, Ember)

Angular takes a different approach to solving this problem. Instead of struggling with the HTML it is given, it creates new HTML constructs. Angular teaches the browser new HTML syntax through a construct called ‘directives’. Angular comes with a set of these directives built in, but it also allows you to create custom directives, effectively letting you write your own HTML elements.
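To make this concrete, here is a minimal sketch of a custom directive (the module and directive names are illustrative):

// Teach the browser a new element: <hello-world></hello-world>
angular.module('myApp', [])
  .directive('helloWorld', function() {
    return {
      restrict: 'E', // usable as an element
      template: '<p>Hello, World!</p>'
    };
  });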

Wouldn’t it be neat if Google created a set of directives based on Material Design principles?

Introducing Angular Material

Google is actively developing Angular Material, an implementation of Material Design in AngularJS. Angular Material provides a set of reusable UI components based on the Material Design system. Angular Material is composed of several pieces. It has a CSS library for typography and other elements, it provides an interesting JavaScript approach for theming, and its responsive layout uses a flex grid. But the most appealing feature of Angular Material is its amazing collection of directives.

Getting Started

I have created an open source project to help jumpstart your next Angular Material project. The purpose of this project is to give an example of everything Angular Material has to offer, all under one roof. Navigation, paging, theming, and the entire collection of directives are ready to go, all you have to do is feed in your data and bind it to the HTML.

Take a look at the demo here or fork the code on GitHub.


Directives

Directives are a core Angular feature. Angular comes with several directives that you use all of the time, like ng-model or ng-repeat. They are a very important piece of what makes the framework function as it should.

How to Use an Angular Material Directive

Angular Material extends this directive library with a set of beautiful Material Design-inspired directives. Angular Material directives are HTML tags that begin with ‘md’, short for Material Design. They couldn’t be much easier to use. For example, let’s take a look at the good old button.

A standard HTML button might look something like this.

<button>Click Me</button>

An Angular Material button looks like this.

<md-button>Click Me</md-button>

And this is all that is needed to make a Material button. Now, there are several other options that are available for this directive such as theming it and raising it from the surface to imply importance.

<md-button class="md-raised md-primary md-hue-1">Click Me</md-button>


Services

Services are also core to Angular functionality. They are used to share code across the application. A common core service like $http is used and reused for data calls in Angular applications.

Angular services are:

  1. Lazily instantiated – Angular only instantiates a service when an application component depends on it.
  2. Singletons – Each component dependent on a service gets a reference to the single instance generated by the service factory.
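As a quick illustration, here is how a component might consume the core $http service (the module, controller, and endpoint names are assumptions for the example):

// The same $http singleton is injected into every component that asks for it
angular.module('myApp')
  .controller('UserListCtrl', function($scope, $http) {
    $http.get('/api/users').then(function(response) {
      $scope.users = response.data;
    });
  });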

How to Use an Angular Material Service

Angular Material comes packaged with services that provide extra functionality to the application and contribute to the performance of some of the directives. A great example of a service is the ‘toast’.

A toast is a small notification that slides in from the top of the screen and goes away after a few seconds. Using this service is easy.

In JavaScript,

      $mdToast.show($mdToast.simple('Simple Toast!')
        .position('bottom left')
        .hideDelay(3000));

This example shows a simple toast that pops up on the bottom left of the screen and retreats after 3 seconds.

Some services can be personalized with custom templates. In this case, the $mdToast service can use a custom HTML template by using the md-toast directive.
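A sketch of what such a template might look like (the file name, message, and button handler are assumptions):

<!-- my-toast.tmpl.html: a custom toast built with the md-toast directive -->
<md-toast>
  <span class="md-toast-text" flex>Item saved!</span>
  <md-button ng-click="closeToast()">OK</md-button>
</md-toast>

It would then be displayed by passing the template to the service, e.g. $mdToast.show({ templateUrl: 'my-toast.tmpl.html' }).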


Theming

Material Design is a visual language where themes convey meaning through color, tones, and contrast. These themes are expressed throughout the components in the entire application to provide a more unified feel.

According to the Material Design guidelines, you must “limit your selection of colors by choosing three color hues from the primary palette and one accent color from the secondary palette.” Angular Material makes following this guideline simple by using JavaScript to configure the theme. But first, what is a palette and a hue?

  • Hue: A hue is a single color in a palette.
  • Palette: A palette is a collection of hues.

For example, a palette would be ‘green’ and a hue is a particular shade of green. Angular Material comes packaged with all of the valid palettes from the Material Design spec. You can learn more about the valid color palettes here.

Theming your project is a piece of cake. In the app.js file, set your desired palettes and hues using the $mdThemingProvider service.

angular.module('myApp', ['ngMaterial'])
.config(function($mdThemingProvider) {
    $mdThemingProvider.theme('default')
    .primaryPalette('cyan', {
      'default': '400',
      'hue-1': '100',
      'hue-2': '600',
      'hue-3': 'A100'
    });
});

Using the Theme

To apply the theme to the components, set the class of the element to the desired palette and hue.

<md-button class="md-primary">Click me</md-button>
<md-button class="md-primary md-hue-1">Click me</md-button>
<md-button class="md-primary md-hue-2">Click me</md-button>
<md-button class="md-accent">or maybe me</md-button>
<md-button class="md-warn">Careful</md-button>


Flexbox Layout

Flexbox is the latest and greatest addition to responsive design, and Angular Material comes packaged with it. If you are familiar with the Bootstrap grid system, then you should be able to catch on quickly. In fact, Bootstrap is switching to Flexbox in its upcoming release. It has the familiar rows and columns layout you have become accustomed to, but with much more. Learn how to use Flexbox with this tutorial or study the official documentation.
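As a rough sketch, a responsive two-column layout built with these directives might look like this (the flex values and content are placeholders):

<!-- A row on large screens that stacks into a column on small ones -->
<div layout="row" layout-sm="column">
  <div flex="66">Main content</div>
  <div flex="33">Sidebar</div>
</div>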

Top 9 Best Angular Material Directives

There are too many Angular Material directives to list them all, so I would like to share with you my favorites.

9. Progress Linear

Often in SPAs, pages need time to load data from the server. If the application shows a blank page during this time, users may think the application is broken and will leave. Let users know the data is loading with the Progress Linear directive. Users will know to wait when they see an animated progress bar indicating that something is happening. Alternatively, use the Progress Circular directive for a round indicator.
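A minimal sketch, assuming an isLoading flag on the scope:

<!-- Indeterminate bar shown only while data is loading -->
<md-progress-linear md-mode="indeterminate" ng-if="isLoading"></md-progress-linear>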

8. Date Picker

The Date Picker directive makes choosing a date a clean, simple experience for the user and a true one-liner to write. Simply use md-datepicker, optionally confining the range with md-min-date and md-max-date, and that’s it.
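A sketch of the markup (the ctrl model properties are assumptions):

<md-datepicker ng-model="ctrl.date"
               md-placeholder="Enter date"
               md-min-date="ctrl.minDate"
               md-max-date="ctrl.maxDate"></md-datepicker>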

7. Autocomplete

Autocomplete provides a pleasant user experience by helping the user choose an option. It is what makes Google’s search engine the best. The Autocomplete directive adds this functionality to your application by completing a user’s words as they type. But the best part about this directive is customization. By filling your autocomplete with md-item-template, you can give more meaning to the suggestions. For instance, if a user were searching for names in a company, the autocomplete could show the matching names with their picture and company role, giving a more robust user experience.
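A sketch of that richer setup (the controller, its querySearch helper, and the person fields are assumptions):

<md-autocomplete md-search-text="ctrl.searchText"
                 md-items="person in ctrl.querySearch(ctrl.searchText)"
                 md-item-text="person.name"
                 placeholder="Search for a colleague">
  <md-item-template>
    <!-- A richer suggestion: photo, name, and role -->
    <img ng-src="{{person.photoUrl}}" width="24" alt="">
    <span md-highlight-text="ctrl.searchText">{{person.name}}</span>
    <span>{{person.role}}</span>
  </md-item-template>
</md-autocomplete>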

6. Bottom Sheet

The bottom sheet is a little menu that slides up from the bottom of your screen, covering content and taking focus. Originally intended solely for mobile devices, the bottom sheet has been gaining popularity on larger screens. To use it, create a template with md-bottom-sheet containing either an md-grid or an md-list, for a grid layout or list layout, respectively. Then call it with the Bottom Sheet service, $mdBottomSheet.
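A sketch of the call (the template and controller names are assumptions):

// The returned promise resolves when the user picks an item
$mdBottomSheet.show({
  templateUrl: 'bottom-sheet-list.tmpl.html',
  controller: 'ListBottomSheetCtrl'
}).then(function(clickedItem) {
  // React to the user's selection here
});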

5. Input

Input forms are boring and have been since the beginning of the internet. But they don’t have to be! Give your inputs some flair with the Input directive. Wrap your input tag with md-input-container and watch it come to life. Watch as your placeholder animates into a floating label. Easily validate your input with instant, but subtle, color changes and warning messages. The Input directive takes an element that is expected to be boring and delivers a pleasant surprise.
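A minimal sketch (the model name is an assumption):

<md-input-container>
  <label>Email</label>
  <input type="email" ng-model="user.email" required>
</md-input-container>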

4. Toast

The most aggravating user experience is not knowing what the application is doing. We ease this aggravation with toasters, or little unobtrusive notifications. In the olden days, when we sent a request to the server, we waited on that page until the response came back before we could move on. User attention spans have dropped drastically since then. In today’s SPAs, we click a button and expect to move along immediately, dealing with the server response when it comes. The Toast directive makes this a piece of cake. A toaster is summoned by simply using the Toast service, $mdToast, and setting the text, duration, and which corner to appear in. Make your own custom toaster with md-toast.

3. Grid List

Are your lists lacking pizazz? Grid lists are an alternative to standard list views. A grid list is best for presenting images, and is optimized for visual comprehension. It works by laying different sized tiles on a grid, giving a scattered, eclectic feel. The tile size and layout then respond to the screen size. This directive is sure to give your application an exciting and fun look.

2. Whiteframe

The concept of space is the core of Material Design and its paper metaphor. Two sheets of paper in the same z-position (or depth) form a seam and must move together. Two overlapping sheets of paper, with different z-positions, form a step. They move independently of each other. To follow the design, we must be able to shift elements along the z-axis. Angular Material provides a simple way to do this. Using the Whiteframe directive, set the class to md-whiteframe-z{x}, where x is the units of depth up from the background. The larger the number, the larger the shadow cast by the paper.
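For example:

<!-- The higher the z value, the deeper the shadow -->
<div class="md-whiteframe-z1">Sits just above the page</div>
<div class="md-whiteframe-z3">Floats noticeably higher</div>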

1. Sidenav

Creating a side navigation menu has never been easier. The Sidenav directive places a navigation menu on either the left or right of the screen. Keeping mobile in mind, it swipes in and out as expected, or programmatically with a button click. A nice addition is the lock-open feature: the side navigation can be set to lock open when the screen reaches a certain size. By setting the parameter md-is-locked-open="$mdMedia('gt-sm')", the menu will be tucked away on the phone but locked open on tablets and larger.
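A sketch of such a sidenav (the component id and contents are assumptions):

<!-- Toggle programmatically with $mdSidenav('left').toggle() -->
<md-sidenav md-component-id="left" class="md-sidenav-left"
            md-is-locked-open="$mdMedia('gt-sm')">
  <!-- navigation links go here -->
</md-sidenav>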


Google is converting its most popular applications to Material Design, and it is now heading the development of Angular Material, an implementation of Material Design written in AngularJS. Material Design uses a paper metaphor, bold intentions, and meaningful motion. AngularJS organizes single-page applications, and Angular Material applies Material Design principles to AngularJS applications.

Material Design is here and Angular Material is a fantastic way to apply the Material Design spec to your single-page applications. If you want to create your own Angular Material application, don’t waste your time starting from scratch. Rather, start off with a fully functioning app with demos of the directives, theming already set up, and navigation and routing ready to go. Take a look at the demo here or fork the code on GitHub. Of course, you can also learn all about Angular Material by visiting the official documentation.

What do you think about my picks for the best Angular Material directives? Did I get them right? What are your favorites?

Source: Toptal

The 10 Most Common Mistakes That WordPress Developers Make

We are only human, and one of the traits of being human is that we make mistakes. On the other hand, we are also self-correcting, meaning we tend to learn from our mistakes and are thereby hopefully able to avoid making the same ones twice. A lot of the mistakes I have made in the WordPress realm originate from trying to save time when implementing solutions, only to have them rear their heads down the road when issues crop up as a result. Making mistakes is inevitable, but learning from other people’s oversights (and your own, of course!) is a road you should proactively take.


Engineers look like superheroes, but we’re still human. Learn from us.

Common Mistake #1: Keeping the Debugging Off

Why should I use debugging when my code is working fine? Debugging is a feature built into WordPress that will cause all PHP errors, warnings, and notices (about deprecated functions, etc.) to be displayed. When debugging is turned off, there may be important warnings or notices being generated that we never see, but which might cause issues later if we don’t deal with them in time. We want our code to play nicely with all the other elements of our site. So, when adding any new custom code to WordPress, you should always do your development work with debugging turned on (but make sure to turn it off before deploying the site to production!).

To enable this feature, you’ll need to edit the wp-config.php file in the root directory of your WordPress install. Here is a snippet of a typical file:

// Enable debugging
define('WP_DEBUG', true);

// Log all errors to a text file located at /wp-content/debug.log
define('WP_DEBUG_LOG', true);

// Don't display error messages; write them to the log file /wp-content/debug.log
define('WP_DEBUG_DISPLAY', false);

// Ensure all PHP errors are written to the log file and not displayed on screen
@ini_set('display_errors', 0);

This is not an exhaustive list of configuration options that can be used, but this suggested setup should be sufficient for most debugging needs.

Common Mistake #2: Adding Scripts and Styles Using wp_head Hook

What is wrong with adding the scripts into my header template? WordPress already includes a plethora of popular scripts. Still, many developers will add additional scripts using the wp_head hook. This can result in the same script, but a different version, being loaded multiple times.

Enqueuing comes to the rescue here: it is the WordPress-friendly way of adding scripts and styles to our website. We use enqueuing to prevent plugin conflicts and handle any dependencies a script might have. This is achieved by using the inbuilt functions wp_enqueue_script and wp_enqueue_style to enqueue scripts and styles, respectively. The main difference between the two functions is that wp_enqueue_script has an additional parameter that allows us to move the script into the footer of the page.

wp_register_script( $handle, $src, $deps = array(), $ver = false, $in_footer = false )
wp_enqueue_script( $handle, $src = false, $deps = array(), $ver = false, $in_footer = false )

wp_register_style( $handle, $src, $deps = array(), $ver = false, $media = 'all' )
wp_enqueue_style( $handle, $src = false, $deps = array(), $ver = false, $media = 'all' )

If the script is not required to render content above the fold, we can safely move it to the footer to make sure the content above the fold loads quickly. It’s good practice to register the script before enqueuing it, as this allows others to deregister your script via its handle in their own plugins, without modifying the core code of your plugin. In addition, if the handle of a registered script is listed in the array of dependencies of another script that has been enqueued, that script will automatically be loaded first.
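Putting it together, a typical enqueue in a theme’s functions.php might look like this (the handle names, file path, and version number are illustrative):

function mytheme_enqueue_assets() {
	// Register first so other plugins can deregister by handle;
	// the final 'true' loads the script in the footer.
	wp_register_script( 'mytheme-app', get_template_directory_uri() . '/js/app.js', array( 'jquery' ), '1.0.0', true );
	wp_enqueue_script( 'mytheme-app' );
	wp_enqueue_style( 'mytheme-style', get_stylesheet_uri() );
}
add_action( 'wp_enqueue_scripts', 'mytheme_enqueue_assets' );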

Common Mistake #3: Avoiding Child Themes and Modifying WordPress Core Files

Always create a child theme if you plan on modifying a theme. Some developers will make changes to the parent theme files only to discover after an upgrade to the theme that their changes have been overwritten and lost forever.

To create a child theme, create a new directory for it inside wp-content/themes and place a style.css file in it, with the following content:

/*
 Theme Name:   Twenty Sixteen Child
 Theme URI:
 Description:  Twenty Sixteen Child Theme
 Author:       John Doe
 Author URI:
 Template:     twentysixteen
 Version:      1.0.0
 License:      GNU General Public License v2 or later
 License URI:
 Tags:         light, dark, two-columns, right-sidebar, responsive-layout, accessibility-ready
 Text Domain:  twenty-sixteen-child
*/

The above example creates a child theme based on the default WordPress theme, Twenty Sixteen. The most important line of this code is the one containing the word “Template” which must match the directory name of the parent theme you are cloning the child from.

The same principle applies to WordPress core files: Don’t take the easy route by modifying the core files. Put in that extra bit of effort and employ WordPress pluggable functions and filters to prevent your changes from being overwritten after a WordPress upgrade. Pluggable functions let you override some core functions, but this method is slowly being phased out and replaced with filters. Filters achieve the same end result and are inserted at the end of WordPress functions to allow their output to be modified. A trick is to always wrap your functions with if ( ! function_exists() ) when using pluggable functions, since multiple plugins trying to override the same pluggable function without this wrapper will produce a fatal error.
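For instance, the wrapper looks like this (the function below is a hypothetical stand-in for an actual pluggable function):

// Without this check, a second plugin defining the same function
// would trigger a fatal "cannot redeclare" error.
if ( ! function_exists( 'my_pluggable_function' ) ) {
	function my_pluggable_function() {
		// Custom behavior goes here.
	}
}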

Common Mistake #4: Hardcoding Values

Often it looks quicker to just hardcode a value (such as a URL) somewhere in the code, but the time spent down the road debugging and rectifying issues that arise as a result is far greater. By using the corresponding function to generate the desired output dynamically, we greatly simplify subsequent maintenance and debugging of our code. For example, if you migrate your site from a test environment to production with hardcoded URLs, you’ll suddenly notice your site is not working. This is why we should employ functions, like the ones listed below, for generating file paths and links:

// Get child theme directory URI
echo get_stylesheet_directory_uri();
// Get parent theme directory URI
echo get_template_directory_uri();
// Retrieve the URL for the current site
echo get_site_url();

Another bad example of hardcoding is when writing custom queries. For example, as a security measure, we might change the default WordPress database table prefix from wp_ to something a little more unique, like wp743_. Our queries will fail if we ever move the WordPress install, as the table prefixes can change between environments. To prevent this from happening, we can reference the table properties of the wpdb class:

global $wpdb;
$user_count = $wpdb->get_var( "SELECT COUNT(*) FROM $wpdb->users" );

Notice how I am not using the value wp_users for the table name, but instead, I’m letting WordPress work it out. Using these properties for generating the table names will help ensure that we return the correct results.

Common Mistake #5: Not Stopping Your Site From Being Indexed

Why wouldn’t I want search engines to index my site? Indexing is good, right? Well, when building a website, you don’t want search engines to index your site until you have finished building it and have established a permalink structure. Furthermore, if you have a staging server where you test site upgrades, you don’t want search engines like Google indexing these duplicate pages. When there are multiple pieces of indistinguishable content, it is difficult for search engines to decide which version is more relevant to a search query. Search engines will in such cases penalize sites with duplicate content, and your site will suffer in search rankings as a result of this.

As shown below, WordPress Reading Settings has a checkbox that reads “Discourage search engines from indexing this site”, although it carries an important note underneath stating that “It is up to search engines to honor this request”.

WordPress Reading Settings

Bear in mind that search engines often do not honor this request. Therefore, if you want to reliably prevent search engines from indexing your site, edit your .htaccess file and insert the following line:

Header set X-Robots-Tag "noindex, nofollow"

Common Mistake #6: Not Checking if a Plugin is Active

Why should I check if a plugin function exists if my plugin is always switched on? For sure, 99% of the time your plugin will be active. However, what about that 1% of the time when for some reason it has been deactivated? If and when this occurs, your website will probably display some ugly PHP errors. To prevent this, we can check to see if the plugin is active before we call its functions. If the plugin function is being called via the front-end, we need to include the plugin.php library in order to call the function is_plugin_active():

include_once( ABSPATH . 'wp-admin/includes/plugin.php' );
if ( is_plugin_active( 'plugin-folder/plugin-main-file.php' ) ) {
    // Run plugin code
}

This technique is usually quite reliable. However, there could be instances where the author has changed the main plugin directory name. A more robust method would be to check for the existence of a class in the plugin:

if ( class_exists( 'WooCommerce' ) ) {
	// The plugin WooCommerce is turned on
}

Authors are less likely to change the name of a plugin’s class, so I would generally recommend using this method.

Common Mistake #7: Loading Too Many Resources

Why should we be selective in loading plugin resources for pages? There is no valid reason to load styles and scripts for a plugin if that plugin is not used on the page the user has navigated to. By only loading plugin files when necessary, we can reduce our page loading time, which results in an improved end-user experience. Take, for example, a WooCommerce site, where we only want the plugin to be loaded on our shopping pages. In such a case, we can selectively prevent its files from being loaded on all the other pages of the site to reduce bloat. We can add the following code to the theme or plugin’s functions.php file:

function load_woo_scripts_styles(){
	if( function_exists( 'is_woocommerce' ) ){
		// Only load styles/scripts on WooCommerce pages
		if( ! is_woocommerce() && ! is_cart() && ! is_checkout() ) {
			// Dequeue scripts (example WooCommerce handles).
			wp_dequeue_script( 'woocommerce' );
			wp_dequeue_script( 'wc-cart-fragments' );
			wp_dequeue_script( 'wc-add-to-cart' );
			// Dequeue styles (example WooCommerce handles).
			wp_dequeue_style( 'woocommerce-general' );
			wp_dequeue_style( 'woocommerce-layout' );
			wp_dequeue_style( 'woocommerce-smallscreen' );
		}
	}
}

add_action( 'wp_enqueue_scripts', 'load_woo_scripts_styles' );

Scripts can be removed with the function wp_dequeue_script($handle), via the handle with which they were registered. Similarly, wp_dequeue_style($handle) will prevent stylesheets from being loaded. However, if this is too challenging for you to implement, you can install the Plugin Organizer, which provides the ability to load plugins selectively based on certain criteria, such as a post type or page name. While developing, it’s also a good idea to disable any caching plugins, like W3 Total Cache, that you may have switched on, so you don’t have to constantly refresh the cache to see your changes.

Common Mistake #8: Keeping the Admin Bar

Can’t I just leave the WordPress Admin Bar visible for everyone? Well, yes, you could allow your users access to the admin pages. However, these pages very often do not visually integrate with your chosen theme and don’t provide a seamless experience. If you want your site to look professional, you should disable the Admin Bar and provide a front-end account management page of your own:

add_action( 'after_setup_theme', 'remove_admin_bar' );

function remove_admin_bar() {
	if ( ! current_user_can( 'administrator' ) && ! is_admin() ) {
		show_admin_bar( false );
	}
}
The above code, when copied into your theme’s functions.php file, will only display the Admin Bar for administrators of the site. You can use any of the WordPress user roles or capabilities in the current_user_can($capability) function to exclude users from seeing the admin bar.

Common Mistake #9: Not Utilizing the GetText Filter

I can use CSS or JavaScript to change the label of a button, what’s wrong with that? Well, yes, you can. However, you’re adding superfluous code and extra time to render the button, when you can instead use one of the handiest filters in WordPress, called gettext. In conjunction with a plugin’s textdomain, a unique identifier that ensures WordPress can distinguish between all loaded translations, we can employ the gettext filter to modify the text before the page is rendered. If you search the source code for the function load_plugin_textdomain($domain), it will give you the domain name we need to override the text in question. Any reputable plugin will ensure that the textdomain for a plugin is set on initialization of the plugin. If it’s some text in a theme that you want to change, search for the load_theme_textdomain($domain) line of code. Using WooCommerce once again as an example, we can change the text that appears for the “Related Products” heading. Insert the following code into your theme’s functions.php file:

function translate_string( $translated_text, $untranslated_text, $domain ) {
	if ( $translated_text == 'Related Products' ) {
		$translated_text = __( 'Other Great Products', 'woocommerce' );
	}
	return $translated_text;
}

add_filter( 'gettext', 'translate_string', 15, 3 );

This filter hook is applied to the translated text by the internationalization functions __() and _e(), as long as the textdomain is set via the aforementioned functions.

_e( 'Related Products', 'woocommerce' );

Search your plugins for these internationalization functions to see what other strings you can customize.

Common Mistake #10: Keeping the Default Permalink Structure

By default, WordPress uses a query string with the post’s ID to return the specified content. However, this is not user-friendly, and users may remove pertinent parts of the URL when copying it. More importantly, these default permalinks do not use a search engine friendly structure. Enabling what we call “pretty” permalinks will ensure our URLs contain relevant keywords from the post title to improve performance in search engine rankings. It can be quite a daunting task having to retrospectively modify your permalinks, especially if your site has been running for a significant period of time and you’ve got hundreds of posts already indexed by search engines. So after you’ve installed WordPress, ensure you promptly change your permalink structure to something a little more search engine friendly than just a post ID. I generally use the post name for the majority of sites I build, but you can customize the permalink to whatever format you like using the available permalink structure tags.

WordPress Permalink Settings


This article is by no means an exhaustive list of mistakes made by WordPress developers. If there’s one thing you should take away from this article, though, it’s that you should never take shortcuts (and that’s true in any development platform, not just in WordPress!). Time saved now by poor programming practices will come back to haunt you later. Feel free to share with us some mistakes that you have made in the past – and more importantly any lessons learned – by leaving a comment below.

Source: Toptal

Guide to Showcasing Sketch and Photoshop Skills in Your Portfolio

Both Sketch and Photoshop are great tools used by almost every designer to accomplish a huge variety of tasks; “to Photoshop” has even become a verb in the dictionary. It doesn’t come as a surprise that most clients will expect a designer to have a high level of Sketch and Photoshop expertise, and the majority of Toptal design jobs list either Sketch or Photoshop among their main software requirements. All of this is probably making you want to demonstrate your Sketch and Photoshop mastery throughout your portfolio.

Before proceeding, keep in mind that Sketch and Photoshop are just tools: tools do not make great designers, but mastery of a tool gives you the ability to execute your ideas professionally and efficiently.

So, how do you showcase that you are a Sketch or Photoshop expert in your portfolio? It mostly depends on the kind of design work you mainly use either program for.

You do visuals, photo manipulation and illustration

If the focus of your design work is the creation of visuals, illustration, photo manipulation, and photo editing in Photoshop, you’ll want that to shine through in your portfolio. When deciding which projects to showcase, be sure to choose only your best work and try not to be repetitive. Some clients might fall in love with your unique style, but clients often prefer designers who can adapt to different styles and trends.

Choose work that demonstrates your mastery of detailed visual compositions: combining various layers, masks, and advanced blending modes, along with qualities that demonstrate your proficiency with light and shadow. Show that you understand perspective. Include an example that illustrates how immaculately you manage colors. In addition to showing complete visuals or illustrations, put some emphasis on perfectly crafted details and include a few close-ups of the most interesting ones. Share your work process in the portfolio: include some sketches, and show what the raw materials looked like and what you managed to make out of them. If it’s appropriate to showcase photo editing skills, put in some before-and-after visuals.

You are the branding expert

While developing the visual identity as part of a branding project, you probably won’t use Photoshop as your main tool of choice but rather one of the vector tools, such as Illustrator. However, Photoshop will come in handy to visualize how that identity (logo, chosen color palette, and typography) will work and look on stationery, signage, visual identity guidelines, websites, apps, and other advertising materials.

To showcase your branding project at its best, the first step is to find or make some 3D mockup templates. Be careful to choose ones that won’t interfere with the work you are primarily showing; instead, choose ones that will put emphasis on its best features. Avoid weird perspectives and too many distractions in the form of surrounding objects, colors, and patterns.

Remember that you are showcasing your branding capabilities to prospective clients, not trying to sell them good-looking mockups, especially if you haven’t made the mockups yourself. If you are buying or using free templates, be sure they are of good quality. When applying your work inside a mockup, pay attention to details, align everything perfectly, and take care that there are no stray pixels hanging around.

Double-check that you are putting your pages or screens in the right perspective, that lighting, white balance, and shades are all adjusted, and that nothing looks pixelated or distorted. Keep in mind that the scene you are building must look like a real one; although it might not be noticed at first glance, inconsistencies could signal to a potential client that you don’t give enough attention to details or that you are not so well versed in Photoshop.

You are a web or UI designer

Photoshop was not developed for web and user interface visual design, but since no fully suited tool existed at the time, most web designers used it as their primary tool. With the adoption of responsive design and the arrival of more appropriate tools and workflows developed specifically for web and user interface design, Photoshop lost its web design throne. There are still some designers who use Photoshop, especially those not working on Macs, but Sketch is now the leader in the field.

If you are working as a web or user interface designer, no matter which tool you use, you’ll want to show your proficiency and effectiveness with it, and that can hardly be accomplished without revealing your process. High-quality visuals can be produced even by someone who is not a master of their tools, but a glance through your work files and workflow can show potential clients and collaborators that you are one. That is why you should show and describe in your portfolio how you use grids and artboards, structure your layers, deal with Sketch symbols or Adobe CC libraries, and handle typography and styles. Show some close-ups that emphasize your attention to detail. If you craft pixel-perfect icons and other elements in Sketch, display them with pride.

When choosing mockups in which to present web design or UI work, stick with ones that won’t interfere with your designs. Let them be clear, without any unnecessary clutter. If using 3D perspective views, be sure that your work, which is the core content of your portfolio, is shown in a way that keeps all important components visible and understandable, with no perspective distortion.

No matter what, take care of this

If you claim to be a Sketch or Photoshop expert, be sure that all your portfolio projects and presentations look professional. Minor details, like the wrong direction of a shadow or slight pixelation, might suggest to a well-trained eye that your design skills are weak or that you lack the ability to polish your projects down to the last detail.

Be sure that all pictures you put in your portfolio are sharp and that nothing is pixelated, posterized, or distorted. All elements in photomontages should blend seamlessly; the perspectives of different elements must be aligned, and lighting effects, shadows, and white balance kept consistent.

Remember also that although the presentation of projects in your portfolio is very important, and can be a good means of showing your Photoshop skills, you shouldn’t let it become more important than the work itself. If, when looking at your portfolio, one is more aware of the presentation than the content, something went wrong; consider rebuilding the portfolio around your best projects.

Source: Toptal

Guide to Building a Top UI Design Portfolio

Writer’s Note: This is the second in a series of portfolio guides that aim to help those among our readers with the featured skill set.

Before We Begin

Professionals who work in the creative industry need portfolios to showcase their skills and attract clients and peers. Once upon a time, this was solved by creating stunning printed pieces. However, no matter how you look at it, times have changed, and designers are no longer just designers: we’ve got different specialties that cover many different fields within design. It’s important to identify your strengths before starting to build your own portfolio.

Today we will cover all the bases that lead to the creation of an amazing User Interface Portfolio, so if this happens to be your specialization, keep reading!

Quality and Quantity

Treat the creation of your portfolio as any other important project you would work on, and start by picking the number of products or projects you would like to showcase. Think of a number that can cover everything you can do as a UI designer: enough to present you as the perfect candidate for the next big contract, but not so many that your portfolio becomes an overwhelming, never-ending trip for your future clients. Edit your selection with a sharp eye, as you will be judged by your worst piece.

Up to 9 projects is more than enough to show a variety of pieces; however, if that is more than you would like to show in your portfolio, don’t worry, as 6 is also an acceptable number of projects to offer.

We all know working on a portfolio can feel endless, because it’s hard for us as designers to objectively select our best work. However, the sooner you publish your portfolio, the sooner your work will be ready for potential clients to see. Set realistic deadlines for every step of the process: from the very beginning, through picking your projects, to publication.

What About the Target?

This will mainly depend on you: are you a UI designer focusing on gaming? How about a UI designer specialized in designing mobile apps? Maybe you do both, plus more! Each case calls for a different solution, but these tips are applicable to all of them.


Benchmarking

A little research has never truly hurt anyone, and it’s useful to see what kinds of portfolios are out there, what trends you should avoid so your portfolio doesn’t look the same as everyone else’s, and what details are worth exploring and applying to your own presentation. Inspiration is your best friend when you’re starting to build something from scratch.

Awwwards is a good place to look for web-based portfolios, and some Dribbble users offer more on their profiles than you might think.

Of course, learning from your fellow colleagues at Toptal is always a good idea; there are stunning portfolios out there for you to check out!

The Three Pillars

There are three things to keep in mind throughout the process of building a UI portfolio: remember the importance of the visuals, have a solid process, and show the result of each project by telling a story. Be as specific as you want, yet keep an equilibrium between all three.

While it’s important to pay attention to details, and UI designers focus mainly on those, it’s vital for your pieces to be “more than just a pretty face.” UI designers mostly work with UX designers to achieve incredible products; sometimes hybrid designers do both UX and UI at the same time. It’s therefore key to keep the essence of your designs by having some storytelling on every single page and by dodging the well-known “Dribbblisation of Design,” which will differentiate you from regular designers.

Layouts & Styles

The recommended sizes for portfolios nowadays are:

  • A4 horizontal; the width will benefit the amount of content you can show
  • Sizes that are always wider than they are tall, and no smaller than 1280x800px

Note: Most devices nowadays support retina images, which will make your images look sharper and better. Remember to export them at twice their original size, using the @2x suffix.

When thinking of the kind of layout you should design for each product, keep in mind that most of your projects will be different, and each will have a particular style that makes it unique: this will help you with the previously mentioned storytelling. Go from beginning to end, or backward; the possibilities are endless as long as you keep coherence on every single page.

Think of the most eye-catching cover page for each project. Whether it’s the logo of the product on a colored background, a mobile product displayed in a beautiful mockup, or a close-up of a video game interface, all of them can work as long as you keep the visual noise to a minimum. Clients have only a few minutes per page to spare on your portfolio, so it’s important to show and tell as much as possible in a clean and organized way.

Don’t be afraid to put two or three devices together on a cover page, as it will show how adaptable and dynamic your product is, and will also tell the client beforehand how much content they can expect from a project.

Be Meticulous

We live and breathe visuals, so we can’t afford to have pixelated rounded corners on a mockup, inconsistent screen sizes, or slightly different alignments for the same product.

Keep in mind:

  • The alignment of your mockups or screens should stay the same, so as not to generate a slight jump between one page and another. Make sure to check alignments on both the X and Y axes.
  • Work with vector images. If you’re using Sketch, it’s quite handy to have mockups that are scalable and will never look pixelated; use the “scale” option instead of manually scaling your mockup, as it will otherwise lose its shape. If you happen to be using Photoshop, on the other hand, scale your mockup once; if you need another size, use Command + Z (or Control + Z on a PC) to go back and scale again from the original, as every repeated scaling makes the image more and more pixelated.
  • Once you’re done with the general alignment of your objects, check the details by zooming in. This will help you spot any lines or shapes that are slightly out of place.
  • If you’re using mockups for mobile or tablets, there are two ways to go regarding the top bar: if you wish to keep it, make sure the battery is at 100% charge and that the carrier shows a real company (for example, AT&T, T-Mobile, or Virgin, among others), because it will give a realistic touch to your product. If you wish to take the top bar away, mobile products usually look better in a rounded container with a 2px radius, without a mockup.
  • The background should always highlight the product you’re trying to show, not pull the client’s view away from it. There are two ways to go about this: 1. use a plain color background that makes a friendly contrast with your product (keep in mind the mockup’s color and the color scheme of your design altogether), or 2. use a pattern or picture as the background, but get creative with its opacity and/or add a semi-transparent color layer on top. Once again, the options are endless as long as the background always stays secondary.
  • For web pages or landing pages, you can go ahead and divide them into three pages to allow for a smooth tour through each portion. Shrinking a full page to fit a single portfolio page would make the client miss key points and details that differentiate your product from others.

The Process

It doesn’t matter whether you do UX, UI, or a completely different specialization within design: it’s always important to show that your work had a process and didn’t just magically appear. Don’t be shy about including rough sketches, the good old technique of paper and pencil, collage, or even photography that helped you in the thought process of building outstanding UI for your product.

Depending on how you want to approach your portfolio, there are different ways and techniques to show these sketches:

  • The simplest method is to scan your sketches and make good use of Photoshop to handle levels, contrast, and brightness before using them at the correct size (not too big nor too small). Depending on what you want to show with these sketches, they can all be spread across the same page, or you can be more organized and select just a few of the most polished ones.
  • If you were inspired by particular objects, taking photographs from above at a 90° angle will show the object at a realistic scale; it’s a trend that has been quite useful as of late (be careful of any shadows over your object!). If your object isn’t as flattering at that angle, however, using non-conventional angles like diagonals can give the photograph more movement.
  • Other tips regarding photography: 1. make sure the photograph is not blurry and that there aren’t other items creating noise or disturbing the general picture, and 2. consider properly cutting out those objects and placing them over solid color backgrounds, or alternatively create a scenario that serves as context. In both cases, check contrast, brightness, and levels, as we don’t want the result to be too bright or too dark.
  • Collages, paintings, or experimentation on paper with different items like brushes, pens, or watercolor pencils can also be scanned or photographed. It mostly depends on what is important to show for each project, and what experience is important for our client to have when they’re taking a look at those pages.


This is your work process, and the way you show it will depend on what kind of projects you’ve worked on. If your main focus is iconography, showing rough sketches and a step-by-step process through to the final form is recommended. If you’re focusing on mobile products instead, screens that are connected to one another to show a feature can also tell a story, and initial sketches of the interface itself are always helpful as well.

Consistency and coherence are important to telling a story no matter how you want to show it. And even though each product will have its own unique style, there is a rhythm that will guide your client’s eyes through each page.


To summarize everything, remember to:

  1. Keep in mind your target which will probably depend on your specialization as a UI Designer.
  2. Pick a reasonable number of projects for your portfolio that can showcase the kind of professional you are.
  3. Do some benchmarking; research has never hurt anyone.
  4. Set realistic deadlines, and treat your portfolio building as another project.
  5. Whatever you do, don’t forget about visuals, written details, and your work process. If there’s something a UI designer can stand out in, it is being quite meticulous with details.
  6. You live and breathe visuals but storytelling is just as important to differentiate you from regular designers who fall into the “Dribbblisation of Design” category.
  7. Be coherent and consistent with your style through every part of your portfolio.

Last but not least: have fun! Your portfolio, whether UI, UX or any other kind, should show not only how capable you are as a professional but also part of your personality and that you have a unique voice and style to offer.

Source: Toptal

Guide to Building a Top Web Design Portfolio

Writer’s Note: This is the first in a series of portfolio guides that aim to help those among our readers with the featured skill set.

A portfolio is a very important link between a designer and a client. It aims to impress a potential client by showing the designer’s work and skills. At Toptal, we screen a lot of web designers and review a lot of portfolios. Creating a top web design portfolio is by no means easy, even for experienced designers. We’re sharing our tips to help you create a top portfolio.

1. Content Is King

Most web designers are no strangers to the concept of content first. Content is king in web design, so why not apply the same concept to your portfolio? Make content the star of the show and focus on the quality of the message you are trying to get across. Try to avoid eye candy in the images you use and concentrate on engaging potential customers through the statement you are making. This is not to say you should neglect the images — after all, they will without a doubt attract clients and open a few doors — but the copy is likely to make you the ideal candidate for a job. Without great copy, there’s no top portfolio; you might easily appear less professional, and the client could choose a different designer. Well-written content is your best chance to communicate your skills and expertise and sell your work to a future employer.

2. Take Your Target Audience Into Account

Another well-known web design strategy is not to think of yourself (the web designer) as the user. As you would with a web design project, think of your target audience and their wants, needs, and possible limitations. Put yourself in the shoes of the people who will be viewing your portfolio, find pain points, and fix them. Help them understand the message you are sending.

Remember that a portfolio is about projects, so aim to find the right balance and remove everything that gets in the way of a clear, concise message. The goal of a portfolio is to showcase your work to potential clients and impress them. They need a quick and easy path to the information they want, so think of a way to provide just that.

3. Tell a Story

Engage potential clients by telling a story. For instance, explaining the process behind a project can go a long way. Showcase not only the finished product but also the way you solve real problems. This will help clients appreciate the time and effort invested behind the scenes and get to know you as a web designer. Explain your role in the project and mention the techniques and technologies used, to demonstrate the value of your work. Your skills should be reflected in the images you provide.

If you were a member of a team, mention and promote the success of the entire team and the project, not only your role. Are there some detailed UI problems you solved which you can share? What deliverables were produced and why? Which of the major KPIs can be used to demonstrate project goals and success? Was there a part of the project that was not a success, and why was that the case? Try to be objective and honest: not every step of a project is without flaws, and no web designer is error-free. Honesty might just be the best policy, and it might impress clients. While you could do all this in a Skype meeting with a potential employer, why not save your time and theirs and tell the story in your portfolio? It’s a definite win-win situation.

4. Don’t Make Your Clients Think

“Don’t Make Me Think” by Steve Krug is one of the most famous web design books and, generally speaking, one of the most famous lessons in web design. Avoid being vague, so clients can accomplish their tasks without hitting roadblocks. Make sure your work, as well as your personal and contact information, is easy to understand and digest. Present goals, results, and features in a direct, concise, and intuitive fashion. If your project is live, make sure to provide a link to the website and let the client discover more. The browser is the natural environment for any website, so it only makes sense to let clients view your project in it. If the project is not online, maybe you can provide a link to a detailed case study, a front-end prototype, or a style guide. This might be your only opportunity to make a lasting impression, so invest extra effort.

5. Be Professional

The final tip may be obvious, but is by no means insignificant: be professional in your presentation. Assure clients you are not willing to gamble with the quality of their projects.

There are a number of ways you can do this. Here are a few:

  • Use spell-check software to avoid spelling errors and coming off as careless.
  • Consider specifying the start and end dates to provide additional information and add to the credibility.
  • Optimize images without sacrificing quality — no-one wants to see pixelated images, but no-one wants to wait for them to load, either. After all, we’re web designers and therefore no strangers to image optimizations.
  • Be honest when stating your work experience and job title.
  • Give credit where credit is due. If other agencies and team members were involved in a project, mention them and their role.
  • Select only your strongest portfolio pieces — quality will always win over quantity and you may well be judged by your weakest work.
  • If the project was a success, ask the client for a testimonial and add it to your portfolio.
  • Ask peers for a review to find ways of improving your portfolio.
  • Much like any website, your portfolio is never finished, so remember to update it regularly and keep improving it.

This wraps up our tips for creating a top web design portfolio.

Source: Toptal.

10 Essential WordPress Interview Questions

1. Consider the following code snippet. Briefly explain what changes it will achieve, who can and cannot view its effects, and at what URL WordPress will make it available.
add_action('admin_menu', 'custom_menu');

function custom_menu(){
    add_menu_page('Custom Menu', 'Custom Menu', 'manage_options', 'custom-menu-slug', 'custom_menu_page_display');
}

function custom_menu_page_display(){
    echo '<h1>Hello World</h1>';
    echo '<p>This is a custom page</p>';
}

The snippet adds a new top-level menu item labeled “Custom Menu” to the WordPress admin dashboard; clicking it displays a simple “Hello World” page. With default settings and roles, admins can view it and all lower roles can’t: the menu item will only be visible to users who have the “manage_options” capability, i.e., the privilege to change settings from the WordPress admin dashboard.

The admin custom page will be made available at this (relative) URL: “?page=custom-menu-slug”.

2. How would you change all the occurrences of “Hello” into “Good Morning” in post/page contents, when viewed before 11AM?

In a plugin or in the theme’s functions.php file, we must create a function that takes text as input, changes it as needed, and returns it. This function must be added as a filter for “the_content”.

It’s important that we put in a little effort to address some details:

  • Only change the text when we have the fully isolated substring “hello”. This will prevent words like “Schellong” from becoming “Scgood morningng”. To do that we must use “word boundary” anchors in the regular expression, putting the word between a pair of “\b”.
  • Keep consistency with the letter case. An easy way to do that is to make the replace case sensitive.
function replace_hello($the_content){
    //only apply the change before 11 AM; current_time('G') returns the blog's current hour (0-23)
    if((int) current_time('G') < 11){
        $the_content = preg_replace('/\bhello\b/', 'good morning', $the_content);
        $the_content = preg_replace('/\bHello\b/', 'Good Morning', $the_content);
    }
    return $the_content;
}
add_filter('the_content', 'replace_hello');

3. What is the $wpdb variable in WordPress, and how can you use it to improve the following code?

function perform_database_action(){
    mysql_query("INSERT into table_name (col1, col2, col3) VALUES ('$value1','$value2','$value3')");
}

$wpdb is a global variable that contains the WordPress database object. It can be used to perform custom database actions on the WordPress database. It provides the safest means for interacting with the WordPress database.

The code above doesn’t follow WordPress best practices, which strongly discourage the use of any direct mysql_query call. WordPress provides easier and safer solutions through $wpdb.

The above code can be modified to be as follows:

function perform_database_action(){
    global $wpdb;
    $data   = array('col1'=>$value1, 'col2'=>$value2, 'col3'=>$value3);
    $format = array('%s','%s','%s');
    $wpdb->insert('table_name', $data, $format);
}
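The same object also supports safe, parameterized reads. As a quick sketch (reusing the table and column names above):

$results = $wpdb->get_results(
    $wpdb->prepare("SELECT col1 FROM table_name WHERE col2 = %s", $value2)
);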
4. The following code, in a plugin, is meant to add “js/jquery-custom-script.js” to the site’s pages. Why is the script not being added, and how would you fix it?

function add_custom_script(){
    wp_enqueue_script(
        'jquery-custom-script',
        plugin_dir_url( __FILE__ ).'js/jquery-custom-script.js'
    );
}

wp_enqueue_script is usually used to inject JavaScript files into HTML.

The script we are trying to queue will not be added, because “add_custom_script()” is called with no hooks. To make this work properly we must use the wp_enqueue_scripts hook. Some other hooks will also work such as init, wp_print_scripts, and wp_head.

Furthermore, since the script seems to be dependent on jQuery, it’s recommended to declare it as such by adding array(‘jquery’) as the 3rd parameter.

Proper use:

add_action('wp_enqueue_scripts', 'add_custom_script');

function add_custom_script(){
    wp_enqueue_script(
        'jquery-custom-script',
        plugin_dir_url( __FILE__ ).'js/jquery-custom-script.js',
        array('jquery')
    );
}

5. Assume we have a file named “wp-content/plugins/hello-world.php” with the following content. What is it missing to be called a plugin and run properly?

add_filter('the_content', 'hello_world');

function hello_world($content){
    return $content . "<h1> Hello World </h1>";
}

The file is missing the plugin headers. Every plugin should include at least the plugin name, declared in a comment block at the top of the file with the following syntax:

/*
Plugin Name: My hello world plugin
*/

6. What is a potential problem in the following snippet of code from a WordPress theme file named “footer.php”?

        </section><!-- end of body section -->
        <footer>All rights reserved</footer>
    </body>
</html>

All footer files must call the <?php wp_footer() ?> function, ideally right before the </body> tag. This will insert references to all scripts and stylesheets that have been added by plugins, themes, and WordPress itself to the footer.

7. What is this code for? How can the end user use it?

function new_shortcode($atts, $content = null) {
    extract(shortcode_atts(array(
        "type" => "warning"
    ), $atts));
    return '<div class="alert alert-' . $type . '">' . $content . '</div>';
}
add_shortcode("warning_box", "new_shortcode");

This shortcode allows authors to show an alert box in posts or pages wherever the shortcode itself is added. The generated HTML is a div with the class name “alert” plus an extra class, “alert-warning” by default. A parameter can change this second class to alter the visual style of the alert box.

Those class naming structures are compatible with Bootstrap.

To use this shortcode, the user has to insert the following code within the body of a post or a page:

[warning_box]Warning message[/warning_box]
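Passing the optional type attribute swaps in a different Bootstrap alert class. For example:

[warning_box type="info"]Just an informational message[/warning_box]

renders as <div class="alert alert-info">Just an informational message</div>.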

8. Is WordPress safe from brute force login attempts? If not, how can you prevent such an attack vector?

No, WordPress on its own is vulnerable to brute force login attempts.

Some good examples of actions performed to protect a WordPress installation against brute force are:

  • Do not use the “admin” username, and use strong passwords.
  • Password protect “wp-login.php”.
  • Set up some server-side protections (IP-based restrictions, firewall, Apache/Nginx modules, etc.)
  • Install a plugin to add a captcha, or limit login attempts.

9. The following line is in a function inside a theme’s “functions.php” file. What is wrong with this line of code?

wp_enqueue_script('custom-script', '/js/functions.js');

Assuming that the “functions.js” file is in the theme’s “js/” folder, we should build the path with get_template_directory_uri() . '/js/functions.js'; otherwise, the visitor’s browser will look for the file in the root directory of the website.
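The corrected call (assuming the file ships with the theme) would look like this:

wp_enqueue_script('custom-script', get_template_directory_uri() . '/js/functions.js');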

10. Suppose you have a non-WordPress PHP website with a WordPress instance in the “/blog/” folder. How can you show a list of the last 3 posts in your non-WordPress pages?

One obvious way is to download, parse, and cache the blog’s RSS feeds. However, since the blog and the website are on the same server, you can use all the WordPress power, even outside it.

The first thing to do is to include the “wp-load.php” file. After which you will be able to perform any WP_Query and use any WordPress function such as get_posts, wp_get_recent_posts, query_posts, and so on.

<?php require_once('blog/wp-load.php'); ?>
<h2>Recent Posts</h2>
<?php
    $recent_posts = wp_get_recent_posts(array('numberposts' => 3));
    foreach($recent_posts as $recent){
        echo '<li><a href="' . get_permalink($recent["ID"]) . '">' . $recent["post_title"] . '</a></li>';
    }
?>

Source: Toptal

Writing Tests That Matter: Tackle The Most Complex Code First

There are a lot of discussions, articles, and blogs around the topic of code quality. People say – use Test Driven techniques! Tests are a “must have” to start any refactoring! That’s all cool, but it’s 2016 and there is a massive volume of products and code bases still in production that were created ten, fifteen, or even twenty years ago. It’s no secret that a lot of them have legacy code with low test coverage.

While I’d like to always be at the leading, or even bleeding, edge of the technology world – engaged with new cool projects and technologies – unfortunately it’s not always possible, and often I have to deal with old systems. I like to say that when you develop from scratch, you act as a creator, mastering new matter. But when you’re working on legacy code, you’re more like a surgeon – you know how the system works in general, but you never know for sure whether the patient will survive your “operation”. And since it’s legacy code, there are not many up-to-date tests for you to rely on. This means that very frequently one of the very first steps is to cover it with tests. More precisely, not merely to provide coverage, but to develop a test coverage strategy.

Coupling and Cyclomatic Complexity: Metrics for Smarter Test Coverage

Forget 100% coverage. Test smarter by identifying classes that are more likely to break.

Basically, what I needed to determine was what parts (classes / packages) of the system we needed to cover with tests in the first place, where we needed unit tests, where integration tests would be more helpful etc. There are admittedly many ways to approach this type of analysis and the one that I’ve used may not be the best, but it’s kind of an automatic approach. Once my approach is implemented, it takes minimal time to actually do the analysis itself and, what is more important, it brings some fun into legacy code analysis.

The main idea here is to analyse two metrics – coupling (i.e., afferent coupling, or CA) and complexity (i.e. cyclomatic complexity).

The first one measures how many classes use our class, so it basically tells us how close a particular class is to the heart of the system; the more classes there are that use our class, the more important it is to cover it with tests.

On the other hand, if a class is very simple (e.g. contains only constants), then even if it’s used by many other parts of the system, it’s not nearly as important to create a test for. Here is where the second metric can help. If a class contains a lot of logic, the Cyclomatic complexity will be high.

The same logic can also be applied in reverse; i.e., even if a class is not used by many classes and represents just one particular use case, it still makes sense to cover it with tests if its internal logic is complex.

There is one caveat though: let’s say we have two classes – one with CA 100 and complexity 2, and the other with CA 60 and complexity 20. Even though the sum of the metrics is higher for the first one, we should definitely cover the second one first. The first class is used by a lot of other classes but contains very little logic, so there is not much that can break. The second class is still used by plenty of other classes and, more importantly, contains enough complex logic to be a likely source of errors.

To summarize: we need to identify classes with high CA and Cyclomatic complexity. In mathematical terms, a fitness function is needed that can be used as a rating – f(CA,Complexity) – whose values increase along with CA and Complexity.

Generally speaking, the classes with the smallest differences between the two metrics should be given the highest priority for test coverage.

Finding tools to calculate CA and Complexity for the whole code base, and provide a simple way to extract this information in CSV format, proved to be a challenge. During my search, I came across two tools that are free so it would be unfair not to mention them:

A Bit Of Math

The main problem here is that we have two criteria – CA and Cyclomatic complexity – so we need to combine them and convert into one scalar value. If we had a slightly different task – e.g., to find a class with the worst combination of our criteria – we would have a classical multi-objective optimization problem:

We would need to find a point on the so called Pareto front (red in the picture above). What is interesting about the Pareto set is that every point in the set is a solution to the optimization task. Whenever we move along the red line we need to make a compromise between our criteria – if one gets better the other one gets worse. This is called Scalarization and the final result depends on how we do it.

There are a lot of techniques that we can use here. Each has its own pros and cons. However, the most popular ones are linear scalarization and the one based on a reference point. Linear is the easiest one. Our fitness function will look like a linear combination of CA and Complexity:

f(CA, Complexity) = A×CA + B×Complexity

where A and B are some coefficients.

The point which represents a solution to our optimization problem will lie on the line (blue in the picture below). More precisely, it will be at the intersection of the blue line and red Pareto front. Our original problem is not exactly an optimization problem. Rather, we need to create a ranking function. Let’s consider two values of our ranking function, basically two values in our Rank column:

R1 = A×CA1 + B×Complexity1 and R2 = A×CA2 + B×Complexity2

Both of the formulas written above are equations of lines, moreover these lines are parallel. Taking more rank values into consideration we’ll get more lines and therefore more points where the Pareto line intersects with the (dotted) blue lines. These points will be classes corresponding to a particular rank value.

Unfortunately, there is an issue with this approach. For any line (Rank value), we’ll have points with very small CA and very big Complexity (and vice versa) lying on it. This immediately puts points with a big difference between metric values at the top of the list, which is exactly what we wanted to avoid.

The other way to do the scalarizing is based on a reference point. The reference point is a point with the maximum values of both criteria:

(max(CA), max(Complexity))

The fitness function will be the distance between the Reference point and the data points:

f(CA, Complexity) = √((CA − max(CA))² + (Complexity − max(Complexity))²)

We can think about this fitness function as a circle with the center at the reference point. The radius in this case is the value of the Rank. The solution to the optimization problem will be the point where the circle touches the Pareto front. The solution to the original problem will be sets of points corresponding to the different circle radii as shown in the following picture (parts of circles for different ranks are shown as dotted blue curves):

This approach deals better with extreme values but there are still two issues: First – I’d like to have more points near the reference points to better overcome the problem that we’ve faced with linear combination. Second – CA and Cyclomatic complexity are inherently different and have different values set, so we need to normalize them (e.g. so that all the values of both metrics would be from 1 to 100).

Here is a small trick that we can apply to solve the first issue – instead of looking at the CA and Cyclomatic Complexity, we can look at their inverted values. The reference point in this case will be (0,0). To solve the second issue, we can just normalize metrics using minimum value. Here is how it looks:

Inverted and normalized complexity – NormComplexity:

(1 + min(Complexity)) / (1 + Complexity) × 100

Inverted and normalized CA – NormCA:

(1 + min(CA)) / (1 + CA) × 100

Note: I added 1 to make sure that there is no division by 0.

The following picture shows a plot with the inverted values:

Final Ranking

We are now coming to the last step – calculating the rank. As mentioned, I’m using the reference point method, so the only thing that we need to do is to calculate the length of the vector, normalize it, and make it ascend with the importance of a unit test creation for a class. Here is the final formula:

Rank(NormComplexity, NormCA) = 100 − √(NormComplexity² + NormCA²) / √2
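To make the whole ranking step concrete, here is a minimal, self-contained sketch in Java. The metric values are made up for illustration; in practice they would come from the CSV exports of whatever analysis tools you use:

import java.util.Arrays;

public class TestPriorityRank {

    // Inverted and normalized metric: (1 + min) / (1 + value) * 100
    static double normalize(int value, int min) {
        return (1.0 + min) / (1.0 + value) * 100.0;
    }

    public static void main(String[] args) {
        // Made-up (CA, complexity) pairs for three classes
        int[] ca         = {100, 60, 3};
        int[] complexity = {  2, 20, 25};

        int minCa = Arrays.stream(ca).min().getAsInt();
        int minComplexity = Arrays.stream(complexity).min().getAsInt();

        for (int i = 0; i < ca.length; i++) {
            double normCa = normalize(ca[i], minCa);
            double normComplexity = normalize(complexity[i], minComplexity);
            // Distance to the (0, 0) reference point in inverted space,
            // rescaled so the rank grows with test-coverage priority
            double rank = 100.0 - Math.sqrt(normCa * normCa + normComplexity * normComplexity) / Math.sqrt(2.0);
            System.out.printf("class %d: rank %.1f%n", i, rank);
        }
    }
}

Running this ranks the CA 60 / complexity 20 class well above the heavily used but trivial CA 100 / complexity 2 class, matching the caveat discussed earlier.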

More Statistics

There is one more thought that I’d like to add, but let’s first have a look at some statistics. Here is a histogram of the Coupling metrics:

What is interesting about this picture is the number of classes with low CA (0-2). Classes with CA 0 are either not used at all or are top level services. These represent API endpoints, so it’s fine that we have a lot of them. But classes with CA 1 are the ones that are directly used by the endpoints and we have more of these classes than endpoints. What does this mean from architecture / design perspective?

In general, it means that we have a kind of script oriented approach – we script every business case separately (we can’t really reuse the code as business cases are too diverse). If that is the case, then it’s definitely a code smell and we need to do refactoring. Otherwise, it means the cohesion of our system is low, in which case we also need refactoring, but architectural refactoring this time.

Additional useful information we can get from the histogram above is that we can completely filter out classes with low coupling (CA in {0,1}) from the list of the classes eligible for coverage with unit tests. The same classes, though, are good candidates for the integration / functional tests.

You can find all the scripts and resources that I have used in this GitHub repository: ashalitkin/code-base-stats.

Does It Always Work?

Not necessarily. First of all it’s all about static analysis, not runtime. If a class is linked from many other classes it can be a sign that it’s heavily used, but it’s not always true. For example, we don’t know whether the functionality is really heavily used by end users. Second, if the design and the quality of the system is good enough, then most likely different parts / layers of it are decoupled via interfaces so static analysis of the CA will not give us a true picture. I guess it’s one of the main reasons why CA is not that popular in tools like Sonar. Fortunately, it’s totally fine for us since, if you remember, we are interested in applying this specifically to old ugly code bases.

In general, I’d say that runtime analysis would give much better results, but unfortunately it’s much more costly, time consuming, and complex, so our approach is a potentially useful and lower cost alternative.

This article was written by Andrey Shalitkin, a Toptal Java developer.

A New Way of Using Email for Support Apps: An AWS Tutorial

Email may not be as cool as other communication platforms but working with it can still be fun. I was recently tasked with implementing messaging in a mobile app. The only catch was that the actual communication needed to be over email. We wanted app users to be able to communicate with a support team just like you would send a text message. Support team members needed to receive these messages via email, and also needed to be able to respond to the originating user. To the end user, everything needed to look and function just like any other modern messaging app.

In this article, we will take a look at how to implement a service similar to the one described above using Java and a handful of Amazon’s web services. You will need a valid AWS account, a domain name, and access to your favorite Java IDE.

The Infrastructure

Before we write any code, we’re going to set up the required AWS services for routing and consuming email. We’re going to use SES for sending and consuming emails and SNS+SQS for routing incoming messages.

Consuming Email Programmatically Using AWS

Revitalize e-mail in support applications with Amazon SES.

It all starts here with SES. Start by logging into your AWS account and navigating to the SES console.

Before we begin, you’re going to need a verified domain name you can send emails from.

This will be the domain app users will be sending email messages from and support members will be replying to. Verifying a domain with SES is a straightforward process, and more info can be found in the SES documentation.

If this is the first time you are using SES, or you have not requested a sending limit, your account will be sandboxed. This means that you will not be able to send email to addresses that aren’t verified with AWS. This may cause an error later in the tutorial, when we send an email to our fictional help desk. To avoid this, you can verify whatever email address you plan on using as your help desk in the SES console in the Email Addresses tab.

Once you have a verified domain, we can create a rule set. Navigate to the Rule Sets tab in the SES console and create a new Receipt Rule.

The first step when creating a receipt rule will be defining a recipient.

Recipient filters allow you to define which emails SES will consume, and how to process each incoming message. The recipient we define here needs to match the domain and address pattern app user messages are emailed from. The simplest case here would be to add a recipient for the domain we previously verified; with a placeholder domain, that would be ses.example.com. This will configure SES to apply our rule to all emails sent to that domain (e.g., support@ses.example.com).

To create a rule for our entire domain, we would add a recipient for ses.example.com.

It’s also possible to match address patterns. This is useful if you want to route incoming messages to different SQS queues.

Say that we have queue A and queue B. We could add two recipients: a@ses.example.com and b@ses.example.com. If we want to insert a message into queue A, we would email a+anything@ses.example.com. The a part of this will match our recipient; everything between the + and the @ is arbitrary user data, and will not affect SES’s address matching. To insert into queue B, simply replace a with b.

After you define your recipients, the next step is to configure the action SES will perform after consuming a new email. We eventually want these to end up in SQS, however it is currently not possible to go directly from SES to SQS. To bridge the gap, we need to use SNS. Select the SNS action and create a new topic. We will eventually configure this topic to insert messages into SQS.

Select create SNS topic and give it a name.

After the topic is created, we need to select a message encoding. I’m going to use Base64 in order to preserve special characters. The encoding you choose will affect how messages are decoded when we consume them in our service.

Once the rule is set, we just need to name it.

The next step will be configuring SQS and SNS, for that we need to head over to the SQS console and create a new queue.

To keep things simple, I’m using the same name as our SNS topic.

After we define our queue, we’re going to need to adjust its access policy. We only want to grant our SNS topic permission to insert. We can achieve this by adding a condition that matches our SNS topic ARN.

The value field should be populated with the ARN for the SNS topic SES is notifying.

After SQS is set up, it’s time for one more trip back to the SNS console to configure your topic to insert notifications into your shiny new SQS queue.

In the SNS console, select the topic SES is notifying. From there, create a new subscription. The subscription protocol should be Amazon SQS, and the destination should be the ARN of the SQS queue you just generated.

After all that, the AWS side of the equation should be all set up. We can test our work by emailing ourselves. Send an email to the domain configured with SES, then head to the SQS console and select your queue. You should be able to see the payload containing your email.

Java Service to Deal with Emails

Now on to the fun part! In this section, we’re going to create a simple microservice capable of sending messages and processing incoming emails. The first step will be defining an API that will email our support desk on behalf of a user.

A quick note. We’re going to focus on the business logic components of this service, and won’t be defining REST endpoints or a persistence layer.

To build a Spring service, we’re going to use Spring Boot and Maven. We can use Spring Initializr to generate a project for us.

To start, our pom.xml should look something like this (the group and artifact IDs below are placeholders, and the Spring Boot version will depend on when you generate the project):

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.example</groupId> <!-- placeholder coordinates -->
	<artifactId>email-processor</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<packaging>jar</packaging>

	<description>A simple "micro-service" for emailing support on behalf of a user and processing replies</description>

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>1.4.0.RELEASE</version> <!-- illustrative; any recent Boot 1.x release works -->
		<relativePath/> <!-- lookup parent from repository -->
	</parent>

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter</artifactId>
		</dependency>
	</dependencies>
</project>

Emailing Support on Behalf of a User

First, let’s define a bean for emailing our support desk on behalf of a user. The job of this bean will be to process an incoming message from a user ID, and email that message to our pre-defined support desk email address.

Let’s start by defining an interface.

public interface SupportBean {

    /**
     * Send a message to the application support desk on behalf of a user
     * @param fromUserId The ID of the originating user
     * @param message The message to send
     */
    void messageSupport(long fromUserId, String message) throws JsonProcessingException;
}

And an empty implementation:

public class SupportBeanSesImpl implements SupportBean {

    /**
     * Email address for our application help-desk
     * This is the destination address user support emails will be sent to
     * ("support@example.com" is a placeholder; use your own, verified, address)
     */
    private static final String SUPPORT_EMAIL_ADDRESS = "support@example.com";

    @Override
    public void messageSupport(long fromUserId, String message) {
        //todo: send an email to our support address
    }
}

Let’s also add the AWS SDK to our pom; we’re going to use the SES client to send our emails (the version shown is illustrative):

<dependency>
	<groupId>com.amazonaws</groupId>
	<artifactId>aws-java-sdk</artifactId>
	<version>1.11.22</version>
</dependency>

The first thing we need to do is generate an email address to send our user’s message from. The address we generate will play a critical role on the consuming side of our service. It needs to contain enough information to route the help desk’s reply back to the originating user.

To achieve this, we’re going to include the originating user ID in our generated email address. To keep things clean, we’re going to create an object containing the user ID and use the Base64 encoded JSON string of it as the email address.
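For example, a user ID of 1 serializes to {"userID":1}, which Base64 encodes to eyJ1c2VySUQiOjF9; with our placeholder domain, the generated address would be eyJ1c2VySUQiOjF9@ses.example.com.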

Let’s create a new bean responsible for turning a user ID into an email address.

public interface UserEmailBean {

    /**
     * Returns a unique per user email address
     * @param userID Input user ID
     * @return An email address unique for the input userID
     */
    String emailAddressForUserID(long userID) throws JsonProcessingException;
}

Let’s start our implementation by adding the required constants and a simple inner class that will help us serialize our JSON.

public class UserEmailBeanJSONImpl implements UserEmailBean {

    /**
     * The domain for all our generated email addresses
     * ("ses.example.com" is a placeholder; use the domain you verified with SES)
     */
    private static final String EMAIL_DOMAIN = "ses.example.com";

    /**
     * com.fasterxml.jackson.databind.ObjectMapper used to create a JSON object including our user ID
     */
    private final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public String emailAddressForUserID(long userID) {
        //todo: create the email address
        return null;
    }

    /**
     * Simple helper class we will serialize.
     * The JSON representation of this class will become our user email address
     */
    private static class UserDetails{
        private Long userID;

        public Long getUserID() {
            return userID;
        }

        public void setUserID(Long userID) {
            this.userID = userID;
        }
    }
}

Generating our email address is straightforward: all we need to do is populate a UserDetails object and Base64 encode its JSON representation. Since Jackson’s writeValueAsString throws a checked JsonProcessingException, the method (and the interface) needs to declare it. The finished version of our emailAddressForUserID method should look something like this:

    @Override
    public String emailAddressForUserID(long userID) throws JsonProcessingException {
        UserDetails userDetails = new UserDetails();
        userDetails.setUserID(userID);
        //create a JSON representation.
        String jsonString = objectMapper.writeValueAsString(userDetails);
        //Base64 encode it
        String base64String = Base64.getEncoder().encodeToString(jsonString.getBytes());
        //create an email address out of it
        String emailAddress = base64String + "@" + EMAIL_DOMAIN;
        return emailAddress;
    }

Now we can head back to SupportBeanSesImpl and update it to use the new email bean we just created.

private final UserEmailBean userEmailBean;

public SupportBeanSesImpl(UserEmailBean userEmailBean) {
        this.userEmailBean = userEmailBean;
}

public void messageSupport(long fromUserId, String message) throws JsonProcessingException {
        //user specific email
        String fromEmail = userEmailBean.emailAddressForUserID(fromUserId);
        //todo: build and send the email
}

To send emails, we’re going to use the AWS SES client included with the AWS SDK.

    /**
     * SES client
     */
    private final AmazonSimpleEmailService amazonSimpleEmailService = new AmazonSimpleEmailServiceClient(
            new DefaultAWSCredentialsProviderChain() //resolves credentials from the standard locations
    );

We’re utilizing the DefaultAWSCredentialsProviderChain to manage credentials for us; this class searches for AWS credentials in the standard locations (environment variables, system properties, the credentials profile file, and so on).

We’re going to need an AWS access key provisioned with access to SES and, eventually, SQS. For more info, check out the documentation from Amazon.

The next step will be updating our messageSupport method to email support using the AWS SDK. The SES SDK makes this a straightforward process. The finished method should look something like this:

public void messageSupport(long fromUserId, String message) throws JsonProcessingException {
        //User specific email
        String fromEmail = userEmailBean.emailAddressForUserID(fromUserId);

        //create the email
        Message supportMessage = new Message(
                new Content("New support request from userID " + fromUserId), //Email subject
                new Body().withText(new Content(message)) //Email body, this contains the user’s message
        );

        //create the send request
        SendEmailRequest supportEmailRequest = new SendEmailRequest(
                fromEmail, //From address, our user's generated email
                new Destination(Collections.singletonList(SUPPORT_EMAIL_ADDRESS)), //to address, our support email address
                supportMessage //Email message defined above
        );

        //Send it off
        amazonSimpleEmailService.sendEmail(supportEmailRequest);
}

To try it out, create a test class and inject the SupportBean. Make sure SUPPORT_EMAIL_ADDRESS defined in SupportBeanSesImpl points to an email address you own. If your SES account is sandboxed, this address also needs to be verified. Email addresses can be verified in the SES console under the Email Addresses section.

@Test
public void emailSupport() throws JsonProcessingException {
	supportBean.messageSupport(1, "Hello World!");
}

After running this, you should see a message show up in your inbox. Better yet, reply to the message and check the SQS queue we set up earlier. You should see a payload containing your reply.

Consuming Replies from SQS

The last step will be to read in emails from SQS, parse out the email message, and figure out which user ID the reply should be forwarded to.

Message queueing services like Amazon SQS play a vital role in service-oriented architecture by allowing services to communicate with each other without having to compromise speed, reliability or scalability.

To listen for new SQS messages, we’re going to use the Spring Cloud AWS messaging SDK. This will allow us to configure a SQS message listener via annotations, and thus avoid quite a bit of boilerplate code.

First, the required dependencies.

Add the Spring Cloud messaging dependency:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-aws-messaging</artifactId>
</dependency>

And add Spring Cloud AWS to your pom dependency management (again, the version is illustrative):

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-aws</artifactId>
			<version>1.1.1.RELEASE</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Currently, Spring Cloud AWS doesn’t support annotation-driven configuration, so we’re going to have to define an XML bean. Luckily, we don’t need much configuration at all, so our bean definition will be pretty light. The main point of this file is to enable annotation-driven queue listeners; this will allow us to annotate a method as an SqsListener.

Create a new XML file named aws-config.xml in your resources folder. Our definition should look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aws-context="http://www.springframework.org/schema/cloud/aws/context"
       xmlns:aws-messaging="http://www.springframework.org/schema/cloud/aws/messaging"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
            http://www.springframework.org/schema/cloud/aws/context http://www.springframework.org/schema/cloud/aws/context/spring-cloud-aws-context.xsd
            http://www.springframework.org/schema/cloud/aws/messaging http://www.springframework.org/schema/cloud/aws/messaging/spring-cloud-aws-messaging.xsd">

    <!--enable annotation driven queue listeners -->
    <aws-messaging:annotation-driven-queue-listener />
    <!--define our region, this lets us reference queues by name instead of by URL. -->
    <aws-context:context-region region="us-east-1" />

</beans>

The important part of this file is <aws-messaging:annotation-driven-queue-listener />. We are also defining a default region. This is not necessary, but doing so will allow us to reference our SQS queue by name instead of URL. We are not defining any AWS credentials, by omitting them Spring will default to DefaultAWSCredentialsProviderChain, the same provider we used earlier in our SES bean. More info can be found in the Spring Cloud AWS docs.

To use this XML config in our Spring Boot app, we need to explicitly import it. Head over to your @SpringBootApplication class and import it.

@SpringBootApplication
@ImportResource("classpath:aws-config.xml") //Explicit import for our AWS XML bean definition
public class EmailProcessorApplication {

	public static void main(String[] args) {
		SpringApplication.run(EmailProcessorApplication.class, args);
	}
}

Now let’s define a bean that will handle incoming SQS messages. Spring Cloud AWS lets us accomplish this with a single annotation!

/**
 * Bean responsible for polling SQS and processing new emails
 */
public class EmailSqsListener {

    @SuppressWarnings("unused") //IntelliJ isn't quite smart enough to recognize methods marked with @SqsListener yet
    @SqsListener(value = "com-example-ses", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)   //Mark this method as a SQS listener
                                                                                                    //Since we already set up our region we can use the logical queue name here
                                                                                                    //Spring will automatically delete messages if this method executes successfully
    public void consumeSqsMessage(@Headers Map<String, String> headers, //Map of headers returned when requesting a message from SQS
                                                                        //This map will include things like the received time, count and message ID
                                  @NotificationMessage String rawJsonMessage   //JSON string representation of our payload
                                                                            //Spring Cloud AWS supports marshalling here as well
                                                                            //For the sake of simplicity we will work with incoming messages as a JSON object
    ) throws Exception{

        //com.amazonaws.util.json.JSONObject included with the AWS SDK
        JSONObject jsonSqsMessage = new JSONObject(rawJsonMessage);
    }
}


The magic here lies with the @SqsListener annotation. With this, Spring will set up an Executor and start polling SQS for us. Every time a new message is found, our annotated method will be invoked with the message contents. Optionally, Spring Cloud can be configured to marshall incoming messages, giving you the ability to work with strongly typed objects inside your queue listener. Additionally, you have the ability to inject a single header or a map of all headers returned from the underlying AWS call.

We’re able to use the logical queue name here since we previously defined the region in aws-config.xml, if we wanted to omit that we would be able to replace the value with our fully qualified SQS URL. We’re also defining a deletion policy, this will configure Spring to delete the incoming message from SQS if its condition is met. There are multiple policies defined in SqsMessageDeletionPolicy, we’re configuring Spring to delete our message if our consumeSqsMessage method executes successfully.

We’re also injecting the returned SQS headers into our method using @Headers, and the injected map will contain metadata related to the queue and payload received. The message body is injected using @NotificationMessage. Spring supports marshalling utilizing Jackson, or via a custom message body converter. For the sake of convenience, we’re just going to inject the raw JSON string and work with it using the JSONObject class included with the AWS SDK.

The payload retrieved from SQS will contain a lot of data. Take a look at the JSONObject to familiarize yourself with the payload returned. Our payload contains data from every AWS service it was passed through, SES, SNS, and finally SQS. For the sake of this tutorial, we really only care about two things: the list of email addresses this was sent to and the email body. Let’s start by parsing out the emails.

//Pull out the array containing all email addresses this was sent to
JSONArray emailAddressArray = jsonSqsMessage.getJSONObject("mail").getJSONArray("destination");
for(int i = 0 ; i < emailAddressArray.length() ; i++){
	String emailAddress = emailAddressArray.getString(i);
}

Since, in the real world, our help desk may include more than just the original sender in a reply, we’re going to want to verify each address before we parse out the user ID. This gives our support desk the ability to message multiple users at the same time, as well as the ability to include non-app users.

Let’s head back over to our UserEmailBean interface and add another method.

/**
 * Returns true if the input email address matches our template
 * @param emailAddress Email to check
 * @return true if it matches
 */
boolean emailMatchesUserFormat(String emailAddress);

In UserEmailBeanJSONImpl, to implement this method we’re going to want to do two things. First, check if the address ends with our EMAIL_DOMAIN, then check if we can marshall it.

    public boolean emailMatchesUserFormat(String emailAddress) {

        //not our address, return right away
        if(!emailAddress.endsWith("@" + EMAIL_DOMAIN)){
            return false;
        }

        //We just care about the email part, not the domain part
        String emailPart = splitEmail(emailAddress);
        try {
            //Attempt to decode our email
            UserDetails userDetails = objectMapper.readValue(Base64.getDecoder().decode(emailPart), UserDetails.class);
            //We assume this email matches if the address is successfully decoded and marshalled
            return userDetails != null && userDetails.getUserID() != null;
        } catch (IllegalArgumentException | IOException e) {
            //The Base64 decoder will throw an IllegalArgumentException if the input string is not Base64 formatted
            //Jackson will throw an IOException if it can't read the string into the UserDetails class
            return false;
        }
    }

    /**
     * Splits an email address on @
     * Returns everything before the @
     * @param emailAddress Address to split
     * @return all parts before @. If no @ is found, the entire address will be returned
     */
    private static String splitEmail(String emailAddress){
        if(!emailAddress.contains("@")){
            return emailAddress;
        }
        return emailAddress.substring(0, emailAddress.indexOf("@"));
    }

We defined two new methods, emailMatchesUserFormat which we just added to our interface, and a simple utility method for splitting an email address on the @. Our emailMatchesUserFormat implementation works by attempting to Base64 decode and marshall the address part back into our UserDetails helper class. If this succeeds, we then check to make sure the required userID is populated. If all this works out, we can safely assume a match.

Head back to our EmailSqsListener and inject the freshly updated UserEmailBean.

   private final UserEmailBean userEmailBean;

    public EmailSqsListener(UserEmailBean userEmailBean) {
        this.userEmailBean = userEmailBean;
    }

Now we’re going to update the consumeSqsMethod. First let’s parse out the email body:

        //Pull our content, remember the content will be Base64 encoded as per our SES settings
        String encodedContent = jsonSqsMessage.getString("content");

        //Create a new String after decoding our body
        String decodedBody = new String(Base64.getDecoder().decode(encodedContent));

Now let’s create a new method that will process the email address and email body.

private void processEmail(String emailAddress, String emailBody){
    //todo: parse out the userID and handle the reply
}

And finally, update the email loop to invoke this method if it finds a match.

//Loop over all sent to addresses
for(int i = 0 ; i < emailAddressArray.length() ; i++){
    String emailAddress = emailAddressArray.getString(i);
    //If we find a match, process the email
    if(userEmailBean.emailMatchesUserFormat(emailAddress)){
        processEmail(emailAddress, decodedBody);
    }
}

Before we implement processEmail, we need to add one more method to our UserEmailBean. We need a method for returning the userID from an email. Head back over to the UserEmailBean interface to add its last method.

    /**
     * Returns the userID from a formatted email address.
     * Returns null if no userID is found.
     * @param emailAddress Formatted email address, this address should be verified using {@link #emailMatchesUserFormat(String)}
     * @return The originating userID if found, null if not
     */
    Long userIDFromEmail(String emailAddress);

The goal of this method will be to return the userID from a formatted address. The implementation will be similar to our verification method. Let’s head over to UserEmailBeanJSONImpl and fill in this method.

    public Long userIDFromEmail(String emailAddress) {
        String emailPart = splitEmail(emailAddress);
        try {
            //Attempt to decode our email
            UserDetails userDetails = objectMapper.readValue(Base64.getDecoder().decode(emailPart), UserDetails.class);
            if(userDetails == null || userDetails.getUserID() == null){
                //We couldn't find a userID
                return null;
            }
            //ID found, return it
            return userDetails.getUserID();
        } catch (IllegalArgumentException | IOException e) {
            //The Base64 decoder will throw an IllegalArgumentException if the input string is not Base64 formatted
            //Jackson will throw an IOException if it can't read the string into the UserDetails class
            //Return null since we didn't find a userID
            return null;
        }
    }
Now head back over to our EmailSqsListener and update processEmail to use this new method.

private void processEmail(String emailAddress, String emailBody){
    //Parse out the email address
    Long userID = userEmailBean.userIDFromEmail(emailAddress);
    if(userID == null){
        //Whoops, we couldn't find a userID. Abort!
        return;
    }
}
Great! Now we have almost everything we need. The last thing we need to do is parse out the reply from the raw message.

Email clients, just like web browsers from a few years ago, are plagued by inconsistencies in their implementations.

Parsing out replies from emails is actually a fairly complicated task. Email message formats are not standardized, and the variations between different email clients can be huge. The raw response is also going to include much more than the reply and a signature. The original message will most likely be included as well. Smart people over at Mailgun put together a great blog post explaining some of the challenges. They also open sourced their machine-learning based approach to parsing emails, check it out here.

The Mailgun library is written in Python, so for our tutorial we’re going to use a simpler Java based solution. GitHub user edlio put together an MIT licensed email parser in Java based on one of GitHub’s libraries. We’re going to use this great library.

First let’s update our pom. We’re going to pull in EmailReplyParser straight from GitHub; assuming JitPack as the resolver, add its repository:

<repositories>
	<repository>
		<id>jitpack.io</id>
		<url>https://jitpack.io</url>
	</repository>
</repositories>

Now add the GitHub dependency (the coordinates follow JitPack’s com.github.<user> convention; the version tag is an assumption, so pin whatever tag or commit you need):

<dependency>
	<groupId>com.github.edlio</groupId>
	<artifactId>EmailReplyParser</artifactId>
	<version>v1.0</version>
</dependency>

We’re also going to use Apache Commons Email. We’re going to need to parse the raw email into a javax.mail MimeMessage before passing it off to the EmailReplyParser. Add the commons dependency (version is illustrative):

<dependency>
	<groupId>org.apache.commons</groupId>
	<artifactId>commons-email</artifactId>
	<version>1.4</version>
</dependency>

Now we can head back over to our EmailSqsListener and finish up processEmail. At this point, we have the originating userID and the raw email body. The only thing left to do is parse out the reply.

To accomplish this, we’re going to use a combination of javax.mail and edlio’s EmailReplyParser.

private void processEmail(String emailAddress, String emailBody) throws Exception {
        //Parse out the email address
        Long userID = userEmailBean.userIDFromEmail(emailAddress);
        if(userID == null){
            //Whoops, we couldn't find a userID. Abort!
            return;
        }

        //Default javax.mail session
        Session session = Session.getDefaultInstance(new Properties());
        //Create a new mimeMessage out of the raw email body
        MimeMessage mimeMessage = MimeMessageUtils.createMimeMessage(session, emailBody);
        MimeMessageParser mimeMessageParser = new MimeMessageParser(mimeMessage);
        //Parse the message
        mimeMessageParser.parse();
        //Parse out the reply for our message
        String replyText = EmailReplyParser.parseReply(mimeMessageParser.getPlainContent());
        //Now we're done!
        //We have both the userID and the response!
        System.out.println("Processed reply for userID: " + userID + ". Reply: " + replyText);
    }

Wrap Up

And that’s it! We now have everything we need to deliver a response to the originating user!

See? I told you email can be fun!

In this article, we saw how Amazon Web Services can be used to orchestrate complex pipelines. Although this particular pipeline was designed around email, the same tools can be leveraged to design even more complex systems, where you don’t have to worry about maintaining the infrastructure and can focus on the fun aspects of software engineering instead.

This article was written by Francis Altomare, a Toptal Java developer.

Go Programming Language: An Introductory Tutorial

What’s the Go Programming Language?

Go is a recent language which sits neatly in the middle of the landscape, providing lots of good features and deliberately omitting many bad ones. It compiles fast, runs fast-ish, includes a runtime and garbage collection, has a simple static type system and dynamic interfaces, and an excellent standard library.

Go and OOP

OOP is one of those features that Go deliberately omits. It has no subclassing, and so there are no inheritance diamonds or super calls or virtual methods to trip you up. Still, many of the useful parts of OOP are available in other ways.

*Mixins* are available by embedding structs anonymously, allowing their methods to be called directly on the containing struct (see embedding). Promoting methods in this way is called *forwarding*, and it’s not the same as subclassing: the method will still be invoked on the inner, embedded struct.

Embedding also doesn’t imply polymorphism. While `A` may have a `B`, that doesn’t mean it is a `B` — functions which take a `B` won’t take an `A` instead. For that, we need interfaces, which we’ll encounter briefly later.
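As a quick, self-contained illustration (toy types, not part of the fund example below):

package main

import "fmt"

type Engine struct{}

// Start is defined on Engine...
func (e Engine) Start() { fmt.Println("vroom") }

type Car struct {
	Engine // ...but embedding promotes it onto Car
}

func startEngine(e Engine) { e.Start() }

func main() {
	c := Car{}
	c.Start() // forwarding: this still runs on the embedded Engine

	// startEngine(c) would not compile: a Car is not an Engine,
	// but we can pass the embedded value explicitly.
	startEngine(c.Engine)
}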

Meanwhile, Go takes a strong position on features that can lead to confusion and bugs. It omits OOP idioms such as inheritance and polymorphism, in favor of composition and simple interfaces. It downplays exception handling in favour of explicit errors in return values. There is exactly one correct way to lay out Go code, enforced by the gofmt tool. And so on.

Go is also a great language for writing concurrent programs: programs with many independently running parts. An obvious example is a webserver: Every request runs separately, but requests often need to share resources such as sessions, caches, or notification queues. This means skilled Go programmers need to deal with concurrent access to those resources.

While the Go language has an excellent set of low-level features for handling concurrency, using them directly can become complicated. In many cases, a handful of reusable abstractions over those low-level mechanisms makes life much easier.

In today’s Go programming tutorial, we’re going to look at one such abstraction: A wrapper which can turn any data structure into a transactional service. We’ll use a Fund type as an example – a simple store for our startup’s remaining funding, where we can check the balance and make withdrawals.

In this introduction to programming in Go, we’ll build the service in small steps, making a mess along the way and then cleaning it up again. Along the way, we’ll encounter lots of cool Go features, including:

  • Struct types and methods
  • Unit tests and benchmarks
  • Goroutines and channels
  • Interfaces and dynamic typing

A Simple Fund

Let’s write some code to track our startup’s funding. The fund starts with a given balance, and money can only be withdrawn (we’ll figure out revenue later).

This graphic depicts a simple goroutine example using the Go programming language.

Go is deliberately not an object-oriented language: There are no classes, objects, or inheritance. Instead, we’ll declare a struct type called Fund, with a simple function to create new fund structs, and two public methods.


package funding

type Fund struct {
    // balance is unexported (private), because it's lowercase
    balance int
}

// A regular function returning a pointer to a fund
func NewFund(initialBalance int) *Fund {
    // We can return a pointer to a new struct without worrying about
    // whether it's on the stack or heap: Go figures that out for us.
    return &Fund{
        balance: initialBalance,
    }
}

// Methods start with a *receiver*, in this case a Fund pointer
func (f *Fund) Balance() int {
    return f.balance
}

func (f *Fund) Withdraw(amount int) {
    f.balance -= amount
}

Testing with benchmarks

Next we need a way to test Fund. Rather than writing a separate program, we’ll use Go’s testing package, which provides a framework for both unit tests and benchmarks. The simple logic in our Fund isn’t really worth writing unit tests for, but since we’ll be talking a lot about concurrent access to the fund later on, writing a benchmark makes sense.

Benchmarks are like unit tests, but include a loop which runs the same code many times (in our case, fund.Withdraw(1)). This allows the framework to time how long each iteration takes, averaging out transient differences from disk seeks, cache misses, process scheduling, and other unpredictable factors.

The testing framework wants each benchmark to run for at least 1 second (by default). To ensure this, it will call the benchmark multiple times, passing in an increasing “number of iterations” value each time (the b.N field), until the run takes at least a second.

For now, our benchmark will just deposit some money and then withdraw it one dollar at a time.


package funding

import "testing"

func BenchmarkWithdrawals(b *testing.B) {
    // Add as many dollars as we have iterations this run
    fund := NewFund(b.N)

    // Burn through them one at a time until they are all gone
    for i := 0; i < b.N; i++ {
        fund.Withdraw(1)
    }

    if fund.Balance() != 0 {
        b.Error("Balance wasn't zero:", fund.Balance())
    }
}

Now let’s run it:

$ go test -bench . funding
testing: warning: no tests to run
BenchmarkWithdrawals    2000000000             1.69 ns/op
ok      funding    3.576s

That went well. We ran two billion (!) iterations, and the final check on the balance was correct. We can ignore the “no tests to run” warning, which refers to the unit tests we didn’t write (in later Go programming examples in this tutorial, the warning is snipped out).

Concurrent Access

Now let’s make the benchmark concurrent, to model different users making withdrawals at the same time. To do that, we’ll spawn ten goroutines and have each of them withdraw one tenth of the money.

How would we structure multiple concurrent goroutines in the Go language?

Goroutines are the basic building block for concurrency in the Go language. They are green threads – lightweight threads managed by the Go runtime, not by the operating system. This means you can run thousands (or millions) of them without any significant overhead. Goroutines are spawned with the go keyword, and always start with a function (or method call):

// Returns immediately, without waiting for `DoSomething()` to complete
go DoSomething()

Often, we want to spawn off a short one-time function with just a few lines of code. In this case we can use a closure instead of a function name:

go func() {
    // ... do stuff ...
}() // Must be a function *call*, so remember the ()

Once all our goroutines are spawned, we need a way to wait for them to finish. We could build one ourselves using channels, but we haven’t encountered those yet, so that would be skipping ahead.

For now, we can just use the WaitGroup type in Go’s standard library, which exists for this very purpose. We’ll create one (called “wg”) and call wg.Add(1) before spawning each worker, to keep track of how many there are. Then the workers will report back using wg.Done(). Meanwhile in the main goroutine, we can just say wg.Wait() to block until every worker has finished.

Inside the worker goroutines in our next example, we’ll use defer to call wg.Done().

defer takes a function (or method) call and runs it immediately before the current function returns, after everything else is done. This is handy for cleanup:

func() {
    resource.Lock()
    defer resource.Unlock()

    // Do stuff with resource
}()

This way we can easily match the Unlock with its Lock, for readability. More importantly, a deferred function will run even if there is a panic in the main function (something that we might handle via try-finally in other languages).

Lastly, deferred functions will execute in the reverse order to which they were called, meaning we can do nested cleanup nicely (similar to the C idiom of nested gotos and labels, but much neater):

func() {
    db.Connect()
    defer db.Disconnect()

    // If Begin panics, only db.Disconnect() will execute
    transaction := db.Begin()
    defer transaction.Close()

    // From here on, transaction.Close() will run first,
    // and then db.Disconnect()

    // ...
}()
OK, so with all that said, here’s the new version:


package funding

import (
    "sync"
    "testing"
)

const WORKERS = 10

func BenchmarkWithdrawals(b *testing.B) {
    // Skip N = 1
    if b.N < WORKERS {
        b.SkipNow()
    }

    // Add as many dollars as we have iterations this run
    fund := NewFund(b.N)

    // Casually assume b.N divides cleanly
    dollarsPerFounder := b.N / WORKERS

    // WaitGroup structs don't need to be initialized
    // (their "zero value" is ready to use).
    // So, we just declare one and then use it.
    var wg sync.WaitGroup

    for i := 0; i < WORKERS; i++ {
        // Let the waitgroup know we're adding a goroutine
        wg.Add(1)

        // Spawn off a founder worker, as a closure
        go func() {
            // Mark this worker done when the function finishes
            defer wg.Done()

            for i := 0; i < dollarsPerFounder; i++ {
                fund.Withdraw(1)
            }
        }() // Remember to call the closure!
    }

    // Wait for all the workers to finish
    wg.Wait()

    if fund.Balance() != 0 {
        b.Error("Balance wasn't zero:", fund.Balance())
    }
}

We can predict what will happen here. The workers will all execute Withdraw on top of each other. Inside it, f.balance -= amount will read the balance, subtract one, and then write it back. But sometimes two or more workers will both read the same balance, and do the same subtraction, and we’ll end up with the wrong total. Right?

$ go test -bench . funding
BenchmarkWithdrawals    2000000000             2.01 ns/op
ok      funding    4.220s

No, it still passes. What happened here?

Remember that goroutines are green threads – they’re managed by the Go runtime, not by the OS. The runtime schedules goroutines across however many OS threads it has available. At the time of writing this Go language tutorial, Go doesn’t try to guess how many OS threads it should use, and if we want more than one, we have to say so. Finally, the current runtime does not preempt goroutines – a goroutine will continue to run until it does something that suggests it’s ready for a break (like interacting with a channel).

All of this means that although our benchmark is now concurrent, it isn’t parallel. Only one of our workers will run at a time, and it will run until it’s done. We can change this by telling Go to use more threads, via the GOMAXPROCS environment variable.
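
We could also set this from code with the runtime package; the environment variable is just more convenient for a quick benchmark run:

import "runtime"

// Equivalent to GOMAXPROCS=4 in the environment
runtime.GOMAXPROCS(4)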

$ GOMAXPROCS=4 go test -bench . funding
BenchmarkWithdrawals-4    --- FAIL: BenchmarkWithdrawals-4
    account_test.go:39: Balance wasn't zero: 4238
FAIL    funding    0.007s

That’s better. Now we’re obviously losing some of our withdrawals, as we expected.

In this Go programming example, the outcome of multiple parallel goroutines is not favorable.

Make it a server

At this point we have various options. We could add an explicit mutex or read-write lock around the fund. We could use a compare-and-swap with a version number. We could go all out and use a CRDT scheme (perhaps replacing the balance field with lists of transactions for each client, and calculating the balance from those).

But we won’t do any of those things now, because they’re messy or scary or both. Instead, we’ll decide that a fund should be a server. What’s a server? It’s something you talk to. In Go, things talk via channels.

Channels are the basic communication mechanism between goroutines. Values are sent to the channel (with channel <- value), and can be received on the other side (with value = <- channel). Channels are “goroutine safe”, meaning that any number of goroutines can send to and receive from them at the same time.
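
For example (a minimal sketch, unrelated to the fund):

package main

import "fmt"

func main() {
    messages := make(chan string)

    // The sender runs in its own goroutine
    go func() {
        messages <- "ping" // channel <- value
    }()

    // Receiving blocks until a value arrives
    fmt.Println(<-messages) // value = <- channel
}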


By default, Go channels are unbuffered. This means that sending a value to a channel will block until another goroutine is ready to receive it. Go also supports fixed buffer sizes for channels (using make(chan someType, bufferSize)). However, for normal communication, buffering is usually a bad idea.

Buffering communication channels can be a performance optimization in certain circumstances, but it should be used with great care (and benchmarking!).

There are, however, uses for buffered channels which aren’t directly about communication. For instance, a common throttling idiom creates a channel with (for example) buffer size `10` and then sends ten tokens into it immediately. Any number of worker goroutines are then spawned, and each receives a token from the channel before starting work, and sends it back afterward. Then, however many workers there are, only ten will ever be working at the same time.
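
Here is one way that idiom might look (a sketch with made-up names, not part of the fund example):

// Ten tokens, buffered so the initial sends don't block
tokens := make(chan bool, 10)
for i := 0; i < 10; i++ {
    tokens <- true
}

// Spawn any number of workers
for i := 0; i < 100; i++ {
    go func() {
        token := <-tokens // take a token before starting work

        // ... do the work ...

        tokens <- token // and return it afterward
    }()
}

Since at most ten goroutines can hold a token at once, at most ten are ever working at the same time.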

Imagine a webserver for our fund, where each request makes a withdrawal. When things are very busy, the FundServer won’t be able to keep up, and requests trying to send to its command channel will start to block and wait. At that point we can enforce a maximum request count in the server, and return a sensible error code (like a 503 Service Unavailable) to clients over that limit. This is the best behavior possible when the server is overloaded.
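
For illustration, one way to enforce that limit at the channel boundary is a non-blocking send, using select with a default case (a sketch, assuming the Commands channel we’re about to build):

select {
case server.Commands <- WithdrawCommand{Amount: 1}:
    // The server accepted the command
default:
    // The server is busy: fail fast and tell the client,
    // e.g. with a 503 Service Unavailable
}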

Adding buffering to our channels would make this behavior less deterministic. We could easily end up with long queues of unprocessed commands based on information the client saw much earlier (and perhaps for requests which had since timed out upstream). The same applies in many other situations, like applying backpressure over TCP when the receiver can’t keep up with the sender.

In any case, for our Go example, we’ll stick with the default unbuffered behavior.

We’ll use a channel to send commands to our FundServer. Every benchmark worker will send commands to the channel, but only the server will receive them.

We could turn our Fund type into a server implementation directly, but that would be messy – we’d be mixing concurrency handling and business logic. Instead, we’ll leave the Fund type exactly as it is, and make FundServer a separate wrapper around it.

Like any server, the wrapper will have a main loop in which it waits for commands, and responds to each in turn. There’s one more detail we need to address here: The type of the commands.

A diagram of the fund being used as the server in this Go programming tutorial.


We could have made our commands channel take *pointers* to commands (`chan *TransactionCommand`). Why didn’t we?

Passing pointers between goroutines is risky, because either goroutine might modify the value it points to. It’s also often less efficient, because the other goroutine might be running on a different CPU core (meaning more cache invalidation).

Whenever possible, prefer to pass plain values around.

In the next section, we’ll be sending several different commands, each with its own struct type. We want the server’s Commands channel to accept any of them. In an OOP language we might do this via polymorphism: have the channel take a superclass, of which the individual command types were subclasses. In Go, we use interfaces instead.

An interface is a set of method signatures. Any type that implements all of those methods can be treated as that interface (without being declared to do so). For our first run, our command structs won’t actually expose any methods, so we’re going to use the empty interface, interface{}. Since it has no requirements, any value (including primitive values like integers) satisfies the empty interface. This isn’t ideal – we only want to accept command structs – but we’ll come back to it later.
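
As a quick illustration, here’s a hypothetical interface (the Withdrawer name is ours) that our fund would satisfy automatically:

// Any type with a matching Withdraw method satisfies this
// interface; no "implements" declaration is needed
type Withdrawer interface {
    Withdraw(amount int)
}

// *Fund has such a method, so it can be used as a Withdrawer
var w Withdrawer = NewFund(100)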

For now, let’s get started with the scaffolding for our server:


package funding

type FundServer struct {
    Commands chan interface{}
    fund *Fund
}

func NewFundServer(initialBalance int) *FundServer {
    server := &FundServer{
        // make() creates builtins like channels, maps, and slices
        Commands: make(chan interface{}),
        fund: NewFund(initialBalance),
    }

    // Spawn off the server's main loop immediately
    go server.loop()

    return server
}

func (s *FundServer) loop() {
    // The built-in "range" clause can iterate over channels,
    // amongst other things
    for command := range s.Commands {
        // Handle the command
    }
}

Now let’s add a couple of struct types for the commands:

type WithdrawCommand struct {
    Amount int
}

type BalanceCommand struct {
    Response chan int
}

The WithdrawCommand just contains the amount to withdraw. There’s no response. The BalanceCommand does have a response, so it includes a channel to send it on. This ensures that responses will always go to the right place, even if our fund later decides to respond out-of-order.

Now we can write the server’s main loop:

func (s *FundServer) loop() {
    for command := range s.Commands {

        // command is just an interface{}, but we can check its real type
        switch command.(type) {

        case WithdrawCommand:
            // And then use a "type assertion" to convert it
            withdrawal := command.(WithdrawCommand)
            s.fund.Withdraw(withdrawal.Amount)

        case BalanceCommand:
            getBalance := command.(BalanceCommand)
            balance := s.fund.Balance()
            getBalance.Response <- balance

        default:
            panic(fmt.Sprintf("Unrecognized command: %v", command))
        }
    }
}

Hmm. That’s sort of ugly. We’re switching on the command type, using type assertions, and possibly crashing.
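
As an aside, Go can combine the switch and the type assertion into a single step, which tidies things up a little (a sketch of the same loop body; it doesn’t fix the possible crash):

switch command := command.(type) {

case WithdrawCommand:
    // Here, command is already a WithdrawCommand
    s.fund.Withdraw(command.Amount)

case BalanceCommand:
    command.Response <- s.fund.Balance()

default:
    panic(fmt.Sprintf("Unrecognized command: %v", command))
}

Let’s forge ahead anyway and update the benchmark to use the server.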

func BenchmarkWithdrawals(b *testing.B) {
    // ...

    server := NewFundServer(b.N)

    // ...

    // Spawn off the workers
    for i := 0; i < WORKERS; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for i := 0; i < dollarsPerFounder; i++ {
                server.Commands <- WithdrawCommand{ Amount: 1 }
            }
        }()
    }

    // ...

    balanceResponseChan := make(chan int)
    server.Commands <- BalanceCommand{ Response: balanceResponseChan }
    balance := <- balanceResponseChan

    if balance != 0 {
        b.Error("Balance wasn't zero:", balance)
    }
}

That was sort of ugly too, especially when we checked the balance. Never mind. Let’s try it:

$ GOMAXPROCS=4 go test -bench . funding
BenchmarkWithdrawals-4     5000000           465 ns/op
ok      funding    2.822s

Much better, we’re no longer losing withdrawals. But the code is getting hard to read, and there are more serious problems. If we ever issue a BalanceCommand and then forget to read the response, our fund server will block forever trying to send it. Let’s clean things up a bit.

Make it a service

A server is something you talk to. What’s a service? A service is something you talk to with an API. Instead of having client code work with the command channel directly, we’ll make the channel unexported (private) and wrap the available commands up in functions.

type FundServer struct {
    commands chan interface{} // Lowercase name, unexported
    // ...
}

func (s *FundServer) Balance() int {
    responseChan := make(chan int)
    s.commands <- BalanceCommand{ Response: responseChan }
    return <- responseChan
}

func (s *FundServer) Withdraw(amount int) {
    s.commands <- WithdrawCommand{ Amount: amount }
}

Now our benchmark can just say server.Withdraw(1) and balance := server.Balance(), and there’s less chance of accidentally sending it invalid commands or forgetting to read responses.

Here is what using the fund as a service might look like in this sample Go language program.

There’s still a lot of extra boilerplate for the commands, but we’ll come back to that later.


Eventually, the money always runs out. Let’s agree that we’ll stop withdrawing when our fund is down to its last ten dollars, and spend that money on a communal pizza to celebrate or commiserate around. Our benchmark will reflect this:

// Spawn off the workers
for i := 0; i < WORKERS; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := 0; i < dollarsPerFounder; i++ {

            // Stop when we're down to pizza money
            if server.Balance() <= 10 {
                break
            }

            server.Withdraw(1)
        }
    }()
}

// ...

balance := server.Balance()
if balance != 10 {
    b.Error("Balance wasn't ten dollars:", balance)
}

This time we really can predict the result.

$ GOMAXPROCS=4 go test -bench . funding
BenchmarkWithdrawals-4    --- FAIL: BenchmarkWithdrawals-4
    fund_test.go:43: Balance wasn't ten dollars: 6
FAIL    funding    0.009s

We’re back where we started – several workers can read the balance at once, and then all update it. To deal with this we could add some logic in the fund itself, like a minimumBalance property, or add another command called WithdrawIfOverXDollars. These are both terrible ideas. Our agreement is amongst ourselves, not a property of the fund. We should keep it in application logic.

What we really need is transactions, in the same sense as database transactions. Since our service executes only one command at a time, this is super easy. We’ll add a Transact command which contains a callback (a closure). The server will execute that callback inside its own goroutine, passing in the raw Fund. The callback can then safely do whatever it likes with the Fund.

Semaphores and errors

In this next example we’re doing two small things wrong.

First, we’re using a `Done` channel as a semaphore to let calling code know when its transaction has finished. That’s fine, but why is the channel type `bool`? We’ll only ever send `true` into it to mean “done” (what would sending `false` even mean?). What we really want is a single-state value (a value that has no value?). In Go, we can do this using the empty struct type: `struct{}`. This also has the advantage of using less memory. In the example we’ll stick with `bool` so as not to look too scary.
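
For reference, the `struct{}` version would look like this (a hypothetical variant, not the one we’ll use below):

// Hypothetical: signalling with the empty struct
type TransactionCommand struct {
    Transactor Transactor
    Done chan struct{}
}

// ... and "done" is then the (only) empty struct value:
command.Done <- struct{}{}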

Second, our transaction callback isn’t returning anything. As we’ll see in a moment, we can get values out of the callback into calling code using scope tricks. However, transactions in a real system would presumably fail sometimes, so the Go convention would be to have the transaction return an `error` (and then check whether it was `nil` in calling code).

We’re not doing that either for now, since we don’t have any errors to generate.
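
If we did, the convention might look like this (a hypothetical sketch; it also assumes Transact is changed to pass the error back):

// Hypothetical error-aware callback type
type Transactor func(fund *Fund) error

// Calling code checks the result in the usual Go style
// (uses the standard "errors" package)
err := server.Transact(func(f *Fund) error {
    if f.Balance() < 1 {
        return errors.New("insufficient funds")
    }
    f.Withdraw(1)
    return nil
})
if err != nil {
    // Handle or report the failure
}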

// Typedef the callback for readability
type Transactor func(fund *Fund)

// Add a new command type with a callback and a semaphore channel
type TransactionCommand struct {
    Transactor Transactor
    Done chan bool
}

// ...

// Wrap it up neatly in an API method, like the other commands
func (s *FundServer) Transact(transactor Transactor) {
    command := TransactionCommand{
        Transactor: transactor,
        Done: make(chan bool),
    }
    s.commands <- command
    <- command.Done
}

// ...

func (s *FundServer) loop() {
    for command := range s.commands {
        switch command.(type) {
        // ...

        case TransactionCommand:
            transaction := command.(TransactionCommand)

            // Run the callback with the raw fund, then
            // signal the waiting caller on its Done channel
            transaction.Transactor(s.fund)
            transaction.Done <- true

        // ...
        }
    }
}

Our transaction callbacks don’t directly return anything, but the Go language makes it easy to get values out of a closure directly, so we’ll do that in the benchmark to set the pizzaTime flag when money runs low:

pizzaTime := false
for i := 0; i < dollarsPerFounder; i++ {

    server.Transact(func(fund *Fund) {
        if fund.Balance() <= 10 {
            // Set it in the outside scope
            pizzaTime = true
            return
        }
        fund.Withdraw(1)
    })

    if pizzaTime {
        break
    }
}

And check that it works:

$ GOMAXPROCS=4 go test -bench . funding
BenchmarkWithdrawals-4     5000000           775 ns/op
ok      funding    4.637s

Nothing but transactions

You may have spotted an opportunity to clean things up some more now. Since we have a generic TransactionCommand, we don’t need WithdrawCommand or BalanceCommand anymore. We’ll rewrite them in terms of transactions:

func (s *FundServer) Balance() int {
    var balance int
    s.Transact(func(f *Fund) {
        balance = f.Balance()
    })
    return balance
}

func (s *FundServer) Withdraw(amount int) {
    s.Transact(func(f *Fund) {
        f.Withdraw(amount)
    })
}

Now the only command the server takes is TransactionCommand, so we can remove the whole interface{} mess in its implementation, and have it accept only transaction commands:

type FundServer struct {
    commands chan TransactionCommand
    fund *Fund
}

func (s *FundServer) loop() {
    for transaction := range s.commands {
        // Now we don't need any type-switch mess
        transaction.Transactor(s.fund)
        transaction.Done <- true
    }
}

Much better.

There’s a final step we could take here. Apart from its convenience functions for Balance and Withdraw, the service implementation is no longer tied to Fund. Instead of managing a Fund, it could manage an interface{} and be used to wrap anything. However, each transaction callback would then have to convert the interface{} back to a real value:

type Transactor func(interface{})

server.Transact(func(managedValue interface{}) {
    fund := managedValue.(*Fund)
    // Do stuff with fund ...
})

This is ugly and error-prone. What we really want is compile-time generics, so we can “template” out a server for a particular type (like *Fund).

Unfortunately, Go doesn’t support generics – yet. It’s expected to arrive eventually, once someone figures out some sensible syntax and semantics for it. In the meantime, careful interface design often removes the need for generics, and when they don’t we can get by with type assertions (which are checked at runtime).
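
For instance, the two-value (“comma, ok”) form of the assertion from the sketch above checks the type safely instead of panicking:

// ok is false if the assertion fails
fund, ok := managedValue.(*Fund)
if !ok {
    // Handle the unexpected type instead of crashing
}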

So we’re done, right?


Well, okay, no.

For instance:

  • A panic in a transaction will kill the whole service.
  • There are no timeouts. A transaction that never returns will block the service forever.
  • If our Fund grows some new fields and a transaction crashes halfway through updating them, we’ll have inconsistent state.
  • Transactions are able to leak the managed Fund object, which isn’t good.
  • There’s no reasonable way to do transactions across multiple funds (like withdrawing from one and depositing in another). We can’t just nest our transactions because it would allow deadlocks.
  • Running a transaction asynchronously now requires a new goroutine and a lot of messing around. Relatedly, we probably want to be able to read the most recent Fund state from elsewhere while a long-running transaction is in progress.

In the next Go programming tutorial, we’ll look at some ways to address these issues.

This article was written by Brendon Hogger, a Toptal Python developer.