
Tips And Tools For Optimizing Android Apps


Android devices have a lot of cores, so writing smooth apps is a simple task for anyone, right? Wrong. Since almost everything on Android can be done in many different ways, picking the best option can be tough. If you want to choose the most efficient method, you have to know what’s happening under the hood. Luckily, you don’t have to rely on your feelings or sense of smell, since there are plenty of tools out there that can help you find bottlenecks by measuring and describing what’s going on. Properly optimized, smooth apps greatly improve the user experience and also drain less battery.

Let’s see some numbers first to consider how important optimization really is. According to a Nimbledroid post, 86% of users (including me) have uninstalled apps after using them only once due to poor performance. If you’re loading some content, you have less than 11 seconds to show it to the user. Only every third user will give you more time. You might also get a lot of bad reviews on Google Play because of it.

Build Better Apps: Android Performance Patterns

Testing your users’ patience is a shortcut to uninstallation.

The first thing every user notices over and over is the app’s startup time. According to another Nimbledroid post, out of the 100 top apps, 40 start in under 2 seconds, and 70 start in under 3 seconds. So if possible, you should generally display some content as soon as possible and delay the background checks and updates a bit.

Always remember: premature optimization is the root of all evil. You also should not waste too much time on micro-optimization. You will see the most benefit from optimizing code that runs often. For example, this includes the onDraw() function, which runs every frame, ideally 60 times per second. Drawing is the slowest operation out there, so try redrawing only what you have to. More about this later.

Performance Tips

Enough theory, here is a list of some of the things you should consider if performance matters to you.

1. String vs StringBuilder

Let’s say that you have a String, and for some reason you want to append more Strings to it 10 thousand times. The code could look something like this.

String string = "hello";
for (int i = 0; i < 10000; i++) {
    string += " world";
}

You can see in the Android Studio monitors how inefficient some String concatenation can be: there is a ton of garbage collection (GC) going on.

String vs StringBuilder

This operation takes around 8 seconds on my fairly good device, which has Android 5.1.1. The more efficient way of achieving the same goal is using a StringBuilder, like this.

StringBuilder sb = new StringBuilder("hello");
for (int i = 0; i < 10000; i++) {
    sb.append(" world");
}
String string = sb.toString();

On the same device, this happens almost instantly, in less than 5 ms. The CPU and memory visualizations are almost totally flat, so you can imagine how big this improvement is. Notice, though, that to achieve this difference we had to append 10 thousand Strings, which you probably don’t do often. So if you are adding just a couple of Strings once, you will not see any improvement. By the way, if you do:

String string = "hello" + " world";

A single concatenation like this gets converted internally to a StringBuilder (or even folded into one constant at compile time), so it will work just fine.

You might be wondering: why is concatenating Strings the first way so slow? The cause is that Strings are immutable, so once they are created, they cannot be changed. Even if you think you are changing the value of a String, you are actually creating a new String with the new value. In an example like:

String myString = "hello";
myString += " world";

What you will get in memory is not one String “hello world”, but actually two Strings. The variable myString will point to “hello world”, as you would expect, but the original String with the value “hello” is still alive, with no reference to it, waiting to be garbage collected. This is also the reason you should store passwords in a char array instead of a String. If you store a password as a String, it will stay in memory in human-readable form until the next GC, for an unpredictable length of time; because of the immutability described above, it stays there even if you assign the variable another value after using it. If you, however, empty the char array after using the password, it will disappear from memory right away.
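
A minimal sketch of that idea (authenticate() is a hypothetical method standing in for whatever consumes the password, and Arrays is java.util.Arrays):

char[] password = {'s', 'e', 'c', 'r', 'e', 't'};
authenticate(password);       // hypothetical consumer of the password
Arrays.fill(password, '\0');  // overwrite the characters; the plain text is gone immediately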

2. Picking the Correct Data Type

Before you start writing code, you should decide which data types to use for your collections. For example, should you use a Vector or an ArrayList? It depends on your use case. If you need a thread-safe collection that allows only one thread at a time to work with it, pick a Vector, as it is synchronized. In other cases, you should probably stick to an ArrayList, unless you have a specific reason to use a Vector.

How about the case when you want a collection with unique objects? Well, you should probably pick a Set. They cannot contain duplicates by design, so you will not have to take care of it yourself. There are multiple types of sets, so pick one that fits your use case. For a simple group of unique items, you can use a HashSet. If you want to preserve the order of items in which they were inserted, pick a LinkedHashSet. A TreeSet sorts items automatically, so you will not have to call any sorting methods on it. It should also sort the items efficiently, without you having to think of sorting algorithms.
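
A quick illustration of the three kinds of sets (assuming the usual java.util imports; the values are arbitrary):

Set<Integer> hashSet = new HashSet<>(Arrays.asList(3, 1, 2, 3));      // duplicate 3 dropped, iteration order not guaranteed
Set<Integer> linkedSet = new LinkedHashSet<>(Arrays.asList(3, 1, 2)); // keeps insertion order: 3, 1, 2
Set<Integer> treeSet = new TreeSet<>(Arrays.asList(3, 1, 2));         // sorted automatically: 1, 2, 3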

Data dominates. If you’ve chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
— Rob Pike’s 5 Rules of Programming

Sorting integers or strings is pretty straightforward. However, what if you want to sort a class by some property? Let’s say you are keeping a list of meals you eat, storing their names and timestamps. How would you sort the meals by timestamp, from lowest to highest? Luckily, it’s pretty simple: implement the Comparable<Meal> interface in the Meal class and override its compareTo() function. To sort the meals from the lowest timestamp to the highest, we could write something like this.

@Override
public int compareTo(Meal meal) {
    // Negative if this meal has the lower timestamp, positive if the higher, 0 on a tie
    if (this.timestamp < meal.getTimestamp()) {
        return -1;
    } else if (this.timestamp > meal.getTimestamp()) {
        return 1;
    }
    return 0;
}
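
With compareTo() in place, sorting becomes a one-liner (assuming meals is a List<Meal>):

Collections.sort(meals); // ascending by timestamp, via Meal's compareTo()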

3. Location Updates

There are a lot of apps out there which collect the user’s location. You should use the Google Location Services API for that purpose, which contains a lot of useful functions. There is a separate article about using it, so I will not repeat it.

I’d just like to stress some important points from a performance perspective.

First of all, request only as precise a location as you actually need. For example, if you are doing some weather forecasting, you don’t need the most accurate location; getting just a very rough, network-based area is faster and more battery efficient. You can achieve this by setting the priority to LocationRequest.PRIORITY_LOW_POWER.

You can also use a function of LocationRequest called setSmallestDisplacement(). Setting this value, in meters, keeps your app from being notified about any location change smaller than it. For example, if you have a map showing nearby restaurants and you set the smallest displacement to 20 meters, the app will not keep making requests to check for restaurants while the user is just walking around a room. The requests would be useless, as there wouldn’t be any new nearby restaurant anyway.

The second rule is to request location updates only as often as you need them. This is quite self-explanatory. If you really are building that weather forecast app, you do not need to request the location every few seconds, as you probably don’t have such precise forecasts (contact me if you do). You can use the setInterval() function to set the interval at which the device updates your app about the location. However, if multiple apps keep requesting the user’s location, every app is notified at every new location update, even with a higher setInterval() set, so to prevent your app from being notified too often, make sure to always set a fastest update interval with setFastestInterval().
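
Putting these pieces together, a low-power request for the hypothetical weather app could look something like this (the interval and displacement values are just an illustration):

LocationRequest request = LocationRequest.create()
        .setPriority(LocationRequest.PRIORITY_LOW_POWER) // a rough, network-based location is enough
        .setInterval(10 * 60 * 1000)                     // ask for updates every 10 minutes
        .setFastestInterval(5 * 60 * 1000)               // but accept at most one update per 5 minutes
        .setSmallestDisplacement(500);                   // ignore movements smaller than 500 meters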

And finally, the third rule is to request location updates only while you need them. If you are displaying some nearby objects on a map every x seconds and the app goes into the background, you do not need the new location; there is no reason to update a map the user cannot see. Make sure to stop listening for location updates when appropriate, preferably in onPause(); you can then resume them in onResume().

4. Network Requests

There is a high chance that your app uses the internet for downloading or uploading data. If it does, you have several reasons to pay attention to how you handle network requests. One of them is mobile data, which is very limited for a lot of people, and you shouldn’t waste it.

The second one is battery. Both the WiFi and the mobile radio can consume quite a lot of it if used too much. Let’s say you want to download 1 KB. To make the network request, you have to wake up the cellular or WiFi radio, and only then can you download your data. However, the radio will not fall asleep immediately after the operation; it will stay in a fairly active state for about 20-40 more seconds, depending on your device and carrier.

Network Requests

So, what can you do about it? Batch. To avoid waking up the radio every couple of seconds, prefetch things the user might need in the upcoming minutes. The proper way of batching depends heavily on your app, but if possible, you should download the data the user might need in the next 3-4 minutes. You can also adjust the batch parameters based on the user’s connection type or charging state: for example, if the user is on WiFi while charging, you can prefetch a lot more data than if the user is on mobile internet with a low battery. Taking all these variables into account can be tough, and few people would do it. Luckily though, there is GCM Network Manager to the rescue!

GCM Network Manager is a really helpful class with a lot of customizable attributes. You can easily schedule both repeating and one-off tasks. For repeating tasks, you can set the lowest as well as the highest repeat interval. This allows batching not only your requests, but also requests from other apps: the radio has to be woken up only once per period, and while it’s up, all apps in the queue download and upload what they are supposed to. This manager is also aware of the device’s network type and charging state, so you can adjust accordingly. You can find more details and samples in this article; I urge you to check it out. An example task looks like this:

Task task = new OneoffTask.Builder()
    .setService(CustomService.class)                  // the service that will execute the task
    .setExecutionWindow(0, 30)                        // run at some point within the next 30 seconds
    .setTag(LogService.TAG_TASK_ONEOFF_LOG)           // identifies the task, e.g. for cancelling it
    .setUpdateCurrent(false)                          // do not replace an existing task with the same tag
    .setRequiredNetwork(Task.NETWORK_STATE_CONNECTED) // wait until any network connection is available
    .setRequiresCharging(false)                       // may run even when the device is not charging
    .build();

By the way, since Android 3.0, if you do a network request on the main thread, you will get a NetworkOnMainThreadException. That will definitely warn you not to do that again.

5. Reflection

Reflection is the ability of classes and objects to examine their own constructors, fields, methods, and so on. It is usually used for backward compatibility, to check whether a given method is available on a particular OS version. If you have to use reflection for that purpose, make sure to cache the result, as using reflection is pretty slow. Some widely used libraries use reflection too, like RoboGuice for dependency injection; that’s a reason to prefer Dagger 2, which does its work at compile time instead. For more details about reflection, you can check a separate post.
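
A sketch of that caching idea: look a method up once and store it in a static field. View.setLayerType() here is just an example of an API that older OS versions lacked:

private static Method setLayerTypeMethod; // cached so the slow lookup happens only once

static {
    try {
        setLayerTypeMethod = View.class.getMethod("setLayerType", int.class, Paint.class);
    } catch (NoSuchMethodException e) {
        setLayerTypeMethod = null; // not available on this OS version
    }
}

Callers can then check the cached field for null instead of repeating the reflective lookup every time.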

6. Autoboxing

Autoboxing and unboxing are the processes of converting a primitive type to an Object type and vice versa; in practice, it means converting an int to an Integer and back. To achieve that, the compiler uses the Integer.valueOf() function internally. Converting is not just slow: the Objects also take a lot more memory than their primitive equivalents. Let’s look at some code.

Integer total = 0;
for (int i = 0; i < 1000000; i++) {
    total += i;
}

While this takes 500ms on average, rewriting it to avoid autoboxing will speed it up drastically.

int total = 0;
for (int i = 0; i < 1000000; i++) {
    total += i;
}

This solution runs in around 2 ms, which is 250 times faster. If you don’t believe me, test it out. The numbers will obviously differ per device, but it should still be a lot faster. And it’s also a really simple step to optimize.

Okay, you probably don’t often create a variable of type Integer like this. But what about the cases when it is more difficult to avoid, like in a map, where you have to use Objects, as in Map<Integer, Integer>? Look at the solution many people use.

Random random = new Random();
Map<Integer, Integer> myMap = new HashMap<>();
for (int i = 0; i < 100000; i++) {
    myMap.put(i, random.nextInt());
}

Inserting 100k random ints in the map takes around 250ms to run. Now look at the solution with SparseIntArray.

SparseIntArray myArray = new SparseIntArray();
for (int i = 0; i < 100000; i++) {
    myArray.put(i, random.nextInt());
}

This takes a lot less time, roughly 50 ms. It’s also one of the easier ways of improving performance, as nothing complicated has to be done, and the code stays readable. While running a clean app with the first solution took 13 MB of my memory, using primitive ints took something under 7 MB, so just about half of it.

SparseIntArray is just one of the cool collections that can help you avoid autoboxing. A map like Map<Integer, Long> could be replaced by SparseLongArray, as the value of the map is of type Long. If you look at the source code of SparseLongArray, you will see something pretty interesting. Under the hood, it is basically just a pair of arrays. You can also use a SparseBooleanArray similarly.

If you read the source code, you might have noticed a note saying that SparseIntArray can be slower than HashMap. I’ve been experimenting a lot, and for me SparseIntArray was always better, both memory- and performance-wise. I guess the choice is still up to you: experiment with your use cases and see what fits best. Definitely keep the SparseArrays in mind when using maps.

7. OnDraw

As I’ve said above, when you are optimizing performance, you will probably see the most benefit from optimizing code that runs often. One such function is onDraw(). It may not surprise you that it’s responsible for drawing views on the screen. As devices usually run at 60 fps, the function runs 60 times per second. Every frame has 16 ms to be fully handled, including its preparation and drawing, so you should really avoid slow functions. Only the main thread can draw on the screen, so you should avoid doing expensive operations on it; if you freeze the main thread for several seconds, you might get the infamous Application Not Responding (ANR) dialog. For resizing images, database work, and the like, use a background thread.

If you think your users won’t notice that drop in frame rate, you are wrong!

I’ve seen some people trying to shorten their code, thinking that it will be more efficient that way. That definitely isn’t the way to go, as shorter code totally doesn’t mean faster code. Under no circumstances should you measure the quality of code by the number of lines.

One of the things you should avoid in onDraw() is allocating objects like Paint. Prepare everything in the constructor, so it’s ready when drawing. Even with onDraw() optimized, you should call it only as often as you have to; what is better than calling an optimized function? Not calling any function at all. In case you want to draw text, there is a pretty neat helper function called drawText(), where you can specify things like the text, coordinates, and the text color.
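
A minimal custom view along those lines could look like this (a sketch; the class name, text size, and color are arbitrary):

public class LabelView extends View {
    // Allocate the Paint once here, never inside onDraw()
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public LabelView(Context context, AttributeSet attrs) {
        super(context, attrs);
        paint.setColor(Color.BLACK);
        paint.setTextSize(48f);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // Draw only; no allocations or other expensive work here
        canvas.drawText("Hello World", 0, getHeight() / 2f, paint);
    }
}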

8. ViewHolders

You probably know this one, but I cannot skip it. The ViewHolder design pattern is a way of making scrolling lists smoother. It is a kind of view caching that can seriously reduce the calls to findViewById() and the inflating of views, by storing references to them. It can look something like this.

static class ViewHolder {
    TextView title;
    TextView text;

    public ViewHolder(View view) {
        title = (TextView) view.findViewById(R.id.title);
        text = (TextView) view.findViewById(R.id.text);
    }
}

Then, inside the getView() function of your adapter, you can check if you have a useable view. If not, you create one.

ViewHolder viewHolder;
if (convertView == null) {
    convertView = inflater.inflate(R.layout.list_item, viewGroup, false);
    viewHolder = new ViewHolder(convertView);
    convertView.setTag(viewHolder);
} else {
    viewHolder = (ViewHolder) convertView.getTag();
}

viewHolder.title.setText("Hello World");

You can find a lot of usable info about this pattern around the internet. It can also be used when your list view has multiple different types of elements in it, like some section headers.

9. Resizing Images

Chances are, your app will contain some images. In case you are downloading some JPGs from the web, they can have really huge resolutions. However, the devices they will be displayed on will be a lot smaller. Even if you take a photo with the camera of your device, it needs to be downsized before displaying as the photo resolution is a lot bigger than the resolution of the display. Resizing images before displaying them is a crucial thing. If you’d try displaying them in full resolutions, you’d run out of memory pretty quickly. There is a whole lot written about displaying bitmaps efficiently in the Android docs, I will try summing it up.

So you have a bitmap, but you don’t know anything about it. There’s a useful BitmapFactory.Options flag called inJustDecodeBounds at your service, which allows you to find out the bitmap’s resolution without loading its pixels. Let’s assume that your bitmap is 1024×768, and the ImageView used for displaying it is just 400×300. You should keep dividing the bitmap’s resolution by 2 as long as the result stays bigger than the ImageView, and use that as the downsampling factor. Here the factor is 2, which gives you a bitmap of 512×384. The downsampled bitmap uses 4x less memory, which will help you a lot in avoiding the famous OutOfMemoryError.
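
A sketch of that approach, using the numbers from above (R.drawable.photo is a placeholder for your image):

BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;                 // decode only the dimensions, not the pixels
BitmapFactory.decodeResource(getResources(), R.drawable.photo, options);

int inSampleSize = 1;
// Keep halving while the result would still be at least as big as the 400x300 target
while (options.outWidth / (inSampleSize * 2) >= 400
        && options.outHeight / (inSampleSize * 2) >= 300) {
    inSampleSize *= 2;
}

options.inJustDecodeBounds = false;
options.inSampleSize = inSampleSize;               // 2 for a 1024x768 source and a 400x300 target
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.photo, options);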

Now that you know how to do it, you should not do it. … At least, not if your app relies on images heavily, rather than displaying just 1-2 of them. Definitely avoid resizing and recycling images manually; use a third-party library for that. The most popular ones are Picasso by Square, Universal Image Loader, Fresco by Facebook, and my favourite, Glide. Glide has a huge active community of developers, so you can find a lot of helpful people in the issues section on GitHub as well.

10. Strict Mode

Strict Mode is a quite useful developer tool that many people don’t know about. It’s usually used for detecting network requests or disk accesses made from the main thread. You can set which issues Strict Mode should look for and which penalty it should trigger. A Google sample looks like this:

public void onCreate() {
    if (DEVELOPER_MODE) {
        StrictMode.setThreadPolicy(new StrictMode.ThreadPolicy.Builder()
                .detectDiskReads()
                .detectDiskWrites()
                .detectNetwork()
                .penaltyLog()
                .build());
        StrictMode.setVmPolicy(new StrictMode.VmPolicy.Builder()
                .detectLeakedSqlLiteObjects()
                .detectLeakedClosableObjects()
                .penaltyLog()
                .penaltyDeath()
                .build());
    }
    super.onCreate();
}

If you want to detect every issue Strict Mode can find, you can also use detectAll(). As with many performance tips, you should not blindly try fixing everything Strict Mode reports. Just investigate it, and if you are sure it’s not an issue, leave it alone. Also make sure to use Strict Mode only for debugging, and always have it disabled on production builds.

Debugging Performance: The Pro Way

Let’s now see some tools that can help you find bottlenecks, or at least show that something is wrong.

1. Android Monitor

This is a tool built into Android Studio. By default, you can find the Android Monitor in the bottom left corner, where you can switch between two tabs: Logcat and Monitors. The Monitors section contains four different graphs: Network, CPU, GPU, and Memory. They are pretty self-explanatory, so I will just quickly go through them. Here is a screenshot of the graphs, taken while parsing some JSON as it is downloaded.

Android Monitor

The Network part shows the incoming and outgoing traffic in KB/s. The CPU part displays the CPU usage in percent. The GPU monitor displays how much time it takes to render the frames of a UI window. This is the most detailed monitor out of these 4, so if you want more details about it, read this.

Lastly, we have the Memory monitor, which you will probably use the most. By default, it shows the current amount of free and allocated memory. You can force a garbage collection with it too, to test whether the amount of used memory drops. It has a useful feature called Dump Java Heap, which creates an HPROF file that can be opened with the HPROF Viewer and Analyzer. That enables you to see how many objects you have allocated, what is taking up how much memory, and perhaps which objects are causing memory leaks. Learning how to use this analyzer is not the simplest task out there, but it is worth it. The other thing you can do with the Memory monitor is timed allocation tracking, which you can start and stop as you wish. It can be useful in many cases, for example when scrolling or rotating the device.

2. GPU Overdraw

This is a simple helper tool which you can activate in Developer Options once you have enabled developer mode. Select Debug GPU overdraw and “Show overdraw areas”, and your screen will take on some weird colors. It’s OK, that’s what is supposed to happen. The colors indicate how many times a particular area was overdrawn. True color means there was no overdraw; this is what you should aim for. Blue means one overdraw, green means two, pink three, and red four.

GPU Overdraw

While seeing true color everywhere would be best, you will always see some overdraw, especially around texts, navigation drawers, dialogs, and more, so don’t try to get rid of it completely. If your app is blueish or greenish, that’s probably fine. However, if you see too much red on some simple screens, you should investigate what’s going on. It might be too many fragments stacked onto each other, if you keep adding them instead of replacing them. As I’ve mentioned above, drawing is the slowest part of an app, so there is no sense in drawing something that will have more than three layers drawn on top of it. Feel free to check your favourite apps with this tool: you will see that even apps with over a billion downloads have red areas, so just take it easy when optimizing.

3. GPU Rendering

This is another tool from the Developer Options, called Profile GPU rendering. Upon selecting it, pick “On screen as bars”, and you will notice some colored bars appearing on your screen. Every application has its own bars; weirdly, the status bar has its own too, and if you have software navigation buttons, so do they. Anyway, the bars get updated as you interact with the screen.

GPU Rendering

The bars consist of 3-4 colors, and according to the Android docs, their size indeed matters: the smaller, the better. At the bottom you have blue, which represents the time used to create and update the view’s display lists. If this part is too tall, there is a lot of custom view drawing, or a lot of work being done in onDraw() functions. On Android 4.0+, you will see a purple bar above the blue one, representing the time spent transferring resources to the render thread. Then comes the red part, which represents the time spent by Android’s 2D renderer issuing commands to OpenGL to draw and redraw display lists. At the top is the orange bar, which represents the time the CPU waits for the GPU to finish its work; if it’s too tall, the app is doing too much work on the GPU.

If you are good enough, there is one more color above the orange: a green line representing the 16 ms threshold. As your goal should be to run your app at 60 fps, you have 16 ms to draw every frame. If you don’t make it, some frames may be skipped, the app can become jerky, and the user will definitely notice. Pay special attention to animations and scrolling, where smoothness matters the most. Even though you can detect some skipped frames with this tool, it won’t really help you figure out where exactly the problem is.

4. Hierarchy Viewer

This is one of my favourite tools out there, as it’s really powerful. You can start it from Android Studio through Tools -> Android -> Android Device Monitor, or find it in your sdk/tools folder as “monitor”. You will also find a standalone hierarchyviewer executable there, but as it’s deprecated, you should open the monitor instead. However you open the Android Device Monitor, switch to the Hierarchy Viewer perspective. If you don’t see any running apps assigned to your device, there are a couple of things you can do to fix it; also try checking out this issue thread, where people with all kinds of issues found all kinds of solutions. Something should work for you too.

With Hierarchy Viewer, you can get a really neat overview of your view hierarchies (obviously). When every layout sits in a separate XML file, useless views are easy to spot, but once you start combining layouts, it can easily get confusing. A tool like this makes it simple to spot, for example, a RelativeLayout whose only child is another RelativeLayout, which makes one of them removable.

Avoid calling requestLayout(), as it causes a traversal of the entire view hierarchy to find out how big each view should be. If there is some conflict among the measurements, the hierarchy might even be traversed multiple times, and if that happens during an animation, it will definitely make some frames be skipped. If you want to find out more about how Android draws its views, you can read this. Let’s look at one view as seen in Hierarchy Viewer.

The top right corner contains a button for maximizing the preview of the particular view in a standalone window. Below it, you can see the actual preview of the view in the app. The next item is a number representing how many children the given view has, including the view itself. If you select a node (preferably the root one) and press “Obtain layout times” (three colored circles), three more values get filled in, with colored circles labelled measure, layout, and draw. It might not be shocking that the measure phase represents the time it took to measure the given view. The layout phase covers the time spent positioning the view and its children, while draw is the actual drawing operation. These values and colors are relative to each other: a green circle means the view performs in the faster 50% of all views in the tree, yellow means the slower 50%, and red means the given view is one of the slowest. As these values are relative, there will always be red ones; you simply cannot avoid them.

Under the values, you have the class name, such as “TextView”, an internal view ID of the object, and the android:id of the view which you set in the XML files. I urge you to build a habit of adding IDs to all views, even if you don’t reference them in code. It will make identifying views in Hierarchy Viewer really simple, and if you have automated tests in your project, it will also make targeting the elements a lot faster, saving time for you and the colleagues writing them. Adding IDs to elements defined in XML files is pretty straightforward, but what about dynamically added elements? It turns out to be really simple too: just create an ids.xml file inside your values folder and declare the required entries. It can look like this:

<resources>
    <item name="item_title" type="id"/>
    <item name="item_body" type="id"/>
</resources>

Then in the code, you can use setId(R.id.item_title). It couldn’t be simpler.

There are a couple more things to pay attention to when optimizing UI. You should generally avoid deep hierarchies while preferring shallow, maybe wide ones. Do not use layouts you don’t need. For example, you can probably replace a group of nested LinearLayouts with either a RelativeLayout, or a TableLayout. Feel free to experiment with different layouts, don’t just always use LinearLayout and RelativeLayout. Also try creating some custom views when needed, it can improve the performance significantly if done well. For instance, did you know that Instagram doesn’t use TextViews for displaying comments?

You can find some more info about Hierarchy Viewer on the Android Developers site with descriptions of different panes, using the Pixel Perfect tool, etc. One more thing I would point out is capturing the views in a .psd file, which can be done by the “Capture the window layers” button. Every view will be in a separate layer, so it’s really simple to hide or change it in Photoshop or GIMP. Oh, that’s another reason to add an ID to every view you can. It will make the layers have names that actually make sense.

You will find a lot more debugging tools in the Developer Options, so I advise you to activate them and see what they do. What could possibly go wrong?

The Android developers site contains a set of best practices for performance. They cover a lot of different areas, including memory management, which I haven’t really talked about. I silently ignored it because handling memory and tracking memory leaks is a whole separate story. Using a third-party library for efficiently displaying images will help a lot, but if you still have memory issues, check out LeakCanary by Square, or read this.

Wrapping Up

So, this was the good news. The bad news is that optimizing Android apps is a lot more complicated than it seems. There are many ways of doing everything, so you should be familiar with their pros and cons. There usually isn’t a silver bullet solution that has only benefits; only by understanding what’s happening behind the scenes will you be able to pick the solution that is best for you. And just because your favorite developer says something is good, it doesn’t necessarily mean it’s the best solution for you. There are a lot more areas to discuss and more advanced profiling tools, so we might get to them next time.

Make sure you learn from the top developers and top companies. You can find a couple hundred engineering blogs at this link. It’s obviously not just Android-related stuff, so if you are interested only in Android, you will have to filter each blog accordingly. I would highly recommend the blogs of Facebook and Instagram. Even though the Instagram UI on Android is questionable, their engineering blog has some really cool articles. For me, it’s awesome that it’s so easy to see how things are done at companies handling hundreds of millions of users daily, so not reading their blogs seems crazy. The world is changing really fast, and if you aren’t constantly trying to improve, learning from others, and using new tools, you will be left behind. As Mark Twain said, a person who doesn’t read has no advantage over one who can’t read.

This article was written by Tibor Kaputa, a Toptal Java developer.


Scaling Scala: How to Dockerize Using Kubernetes


Kubernetes is the new kid on the block, promising to help deploy applications into the cloud and scale them more quickly. Today, when developing for a microservices architecture, it’s pretty standard to choose Scala for creating API servers.

Microservices are replacing classic monolithic back-end servers with multiple independent services that communicate among themselves and have their own processes and resources.

If there is a Scala application in your plans and you want to scale it into a cloud, then you are at the right place. In this article I am going to show step-by-step how to take a generic Scala application and implement Kubernetes with Docker to launch multiple instances of the application. The final result will be a single application deployed as multiple instances, and load balanced by Kubernetes.

All of this will be implemented by simply importing the Kubernetes source kit in your Scala application. Please note, the kit hides a lot of complicated details related to installation and configuration, but it is small enough to be readable and easy to understand if you want to analyze what it does. For simplicity, we will deploy everything on your local machine. However, the same configuration is suitable for a real-world cloud deployment of Kubernetes.

Scale Your Scala Application with Kubernetes

Be smart and sleep tight, scale your Docker with Kubernetes.

What is Kubernetes?

Before going into the gory details of the implementation, let’s discuss what Kubernetes is and why it’s important.

You may have already heard of Docker. In a sense, it is a lightweight virtual machine.

Docker gives the advantage of deploying each server in an isolated environment, very similar to a stand-alone virtual machine, without the complexity of managing a full-fledged virtual machine.

For these reasons, it is already one of the most widely used tools for deploying applications to clouds. A Docker image is pretty easy and fast to build and duplicate, much more so than a traditional virtual machine like VMware, VirtualBox, or XEN.

Kubernetes complements Docker, offering a complete environment for managing dockerized applications. By using Kubernetes, you can easily deploy, configure, orchestrate, manage, and monitor hundreds or even thousands of Docker applications.

Kubernetes is an open source tool developed by Google that has been adopted by many other vendors. It is available natively on the Google Cloud Platform, but other vendors have adopted it for their cloud services too. It can be found on Amazon AWS, Microsoft Azure, RedHat OpenShift, and even more cloud platforms. We can say it is well positioned to become a standard for deploying cloud applications.

Prerequisites

Now that we covered the basics, let’s check if you have all the prerequisite software installed. First of all, you need Docker. If you are using either Windows or Mac, you need the Docker Toolbox. If you are using Linux, you need to install the particular package provided by your distribution or simply follow the official directions.

We are going to code in Scala, which is a JVM language. You need, of course, the Java Development Kit, plus the Scala build tool SBT, installed and available on the global path. If you are already a Scala programmer, chances are you have those tools installed already.

If you are using Windows or Mac, Docker will by default create a virtual machine named default with only 1GB of memory, which can be too small for running Kubernetes. In my experience, I had issues with the default settings, so I recommend that you open the VirtualBox GUI, select your virtual machine default, and change the memory to at least 2048MB.

VirtualBox memory settings

The Application to Clusterize

The instructions in this tutorial can apply to any Scala application or project. To give this article some “meat” to work on, I chose an example used very often to demonstrate a simple REST microservice in Scala: Akka HTTP. I recommend you try to apply the source kit to the suggested example before attempting to use it on your own application. I have tested the kit against the demo application, but I cannot guarantee that there will be no conflicts with your code.

So first, we start by cloning the demo application:

git clone https://github.com/theiterators/akka-http-microservice

Next, test if everything works correctly:

cd akka-http-microservice
sbt run

Then, open http://localhost:9000/ip/8.8.8.8, and you should see something like in the following image:

Akka HTTP microservice is running

Adding the Source Kit

Now, we can add the source kit with some Git magic:

git remote add ScalaGoodies https://github.com/sciabarra/ScalaGoodies
git fetch --all
git merge ScalaGoodies/kubernetes

With that, you have the demo including the source kit, and you are ready to try. Or you can even copy and paste the code from there into your application.

Once you have merged or copied the files in your projects, you are ready to start.

Starting Kubernetes

Once you have downloaded the kit, we need to download the necessary kubectl binary, by running:

bin/install.sh

This installer is smart enough (hopefully) to download the correct kubectl binary for OSX, Linux, or Windows, depending on your system. Note, the installer worked on the systems I own. Please do report any issues, so that I can fix the kit.

Once you have installed the kubectl binary, you can start the whole Kubernetes in your local Docker. Just run:

bin/start-kube-local.sh

The first time it is run, this command downloads the images of the whole Kubernetes stack, plus a local registry needed to store your images. It can take some time, so please be patient. Also note that it needs direct access to the internet. If you are behind a proxy, that will be a problem, as the kit does not support proxies. To solve it, you would have to configure tools like Docker, curl, and so on to use the proxy, which is complicated enough that I recommend getting temporary unrestricted access.

Assuming you were able to download everything successfully, to check if Kubernetes is running fine, you can type the following command:

bin/kubectl get nodes

The expected answer is:

NAME        STATUS    AGE
127.0.0.1   Ready     2m

Note that age may vary, of course. Also, since starting Kubernetes can take some time, you may have to invoke the command a couple of times before you see the answer. If you do not get errors here, congratulations, you have Kubernetes up and running on your local machine.

Dockerizing Your Scala App

Now that you have Kubernetes up and running, you can deploy your application in it. In the old days, before Docker, you had to deploy an entire server for running your application. With Kubernetes, all you need to do to deploy your application is:

  • Create a Docker image.
  • Push it to a registry from which it can be launched.
  • Launch the instance with Kubernetes, which will take the image from the registry.

Luckily, it is way less complicated than it looks, especially if you are using the SBT build tool like many do.

In the kit, I included two files containing all the necessary definitions to create an image able to run Scala applications, or at least what is needed to run the Akka HTTP demo. I cannot guarantee that it will work with every other Scala application, but it is a good starting point and should work for many different configurations. The files to look at for building the Docker image are:

docker.sbt
project/docker.sbt

Let’s have a look at what’s in them. The file project/docker.sbt contains the command to import the sbt-docker plugin:

addSbtPlugin("se.marcuslonnberg" % "sbt-docker" % "1.4.0")

This plugin manages the building of the Docker image with SBT for you. The Docker definition is in the docker.sbt file and looks like this:

imageNames in docker := Seq(ImageName("localhost:5000/akkahttp:latest"))

dockerfile in docker := {
  val jarFile: File = sbt.Keys.`package`.in(Compile, packageBin).value
  val classpath = (managedClasspath in Compile).value
  val mainclass = mainClass.in(Compile, packageBin).value.getOrElse(sys.error("Expected exactly one main class"))
  val jarTarget = s"/app/${jarFile.getName}"
  val classpathString = classpath.files.map("/app/" + _.getName)
    .mkString(":") + ":" + jarTarget
  new Dockerfile {
    from("anapsix/alpine-java:8")
    add(classpath.files, "/app/")
    add(jarFile, jarTarget)
    entryPoint("java", "-cp", classpathString, mainclass)
  }
}

To fully understand the meaning of this file, you need to know Docker well enough to understand this definition file. However, we are not going into the details of the Docker definition file, because you do not need to understand it thoroughly to build the image.

The beauty of using SBT for building the Docker image is that SBT will take care of collecting all the files for you.

Note the classpath is automatically generated by the following command:

val classpath = (managedClasspath in Compile).value

In general, it is pretty complicated to gather all the JAR files needed to run an application. Using SBT, the Dockerfile is generated with add(classpath.files, "/app/"): SBT collects all the JAR files for you and constructs the Dockerfile to run your application.

The other commands gather the missing pieces to create the Docker image. It is built on top of an existing image suited to running Java programs (anapsix/alpine-java:8, available on Docker Hub). The other instructions add the remaining files needed to run the application, and finally, by specifying an entry point, we can run it. Note also that the name starts with localhost:5000 on purpose, because localhost:5000 is where I installed the registry in the start-kube-local.sh script.

Building the Docker Image with SBT

To build the Docker image, you can ignore all the details of the Dockerfile. You just need to type:

sbt dockerBuildAndPush

The sbt-docker plugin will then build a Docker image for you, downloading all the necessary pieces from the internet, and push it to the local Docker registry that was started earlier together with the Kubernetes stack. So, all you need to do is wait a little bit for your image to be cooked and ready.

Note, if you experience problems, the best thing to do is to reset everything to a known state by running the following commands:

bin/stop-kube-local.sh
bin/start-kube-local.sh

Those commands should stop all the containers and restart them correctly to get your registry ready to receive the image built and pushed by sbt.

Starting the Service in Kubernetes

Now that the application is packaged in a container and pushed to the registry, we are ready to use it. Kubernetes uses the command line and configuration files to manage the cluster. Since command lines can become very long, and since configuration files make the steps reproducible, I am using the configuration files here. All the samples in the source kit are in the folder kube.

Our next step is to launch a single instance of the image. A running image is called, in the Kubernetes language, a pod. So let’s create a pod by invoking the following command:

bin/kubectl create -f kube/akkahttp-pod.yml
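
For orientation, a minimal pod manifest along the lines of kube/akkahttp-pod.yml could look like the sketch below; the file shipped with the kit may differ in its details:

apiVersion: v1
kind: Pod
metadata:
  name: akkahttp
  labels:
    app: akkahttp                             # the label the service will select on later
spec:
  containers:
    - name: akkahttp
      image: localhost:5000/akkahttp:latest   # the image we pushed to the local registry
      ports:
        - containerPort: 9000                 # the port Akka HTTP binds to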

You can now inspect the situation with the command:

bin/kubectl get pods

You should see:

NAME                   READY     STATUS    RESTARTS   AGE
akkahttp               1/1       Running   0          33s
k8s-etcd-127.0.0.1     1/1       Running   0          7d
k8s-master-127.0.0.1   4/4       Running   0          7d
k8s-proxy-127.0.0.1    1/1       Running   0          7d

The status can actually differ; for example, it can show “ContainerCreating” for a few seconds before it becomes “Running”. You can also get a status like “Error” if, for example, you forgot to build and push the image beforehand.

You can also check if your pod is running with the command:

bin/kubectl logs akkahttp

You should see an output ending with something like this:

[DEBUG] [05/30/2016 12:19:53.133] [default-akka.actor.default-dispatcher-5] [akka://default/system/IO-TCP/selectors/$a/0] Successfully bound to /0:0:0:0:0:0:0:0:9000

Now you have the service up and running inside the container. However, the service is not yet reachable. This behavior is part of the design of Kubernetes. Your pod is running, but you have to expose it explicitly. Otherwise, the service is meant to be internal.

Creating a Service

Creating a service and checking the result is a matter of executing:

bin/kubectl create -f kube/akkahttp-service.yaml
bin/kubectl get svc

You should see something like this:

NAME               CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
akkahttp-service   10.0.0.54                  9000/TCP   44s
kubernetes         10.0.0.1     <none>        443/TCP    3m

Note that the cluster IP can be different; Kubernetes allocated that address when it created the service. If you are using Linux, you can directly open the browser and type http://10.0.0.54:9000/ip/8.8.8.8 to see the result. If you are using Windows or Mac with Docker Toolbox, the IP is local to the virtual machine that is running Docker, and unfortunately it is still unreachable.
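
For reference, a minimal service manifest in the spirit of kube/akkahttp-service.yaml could look like the sketch below; again, the kit’s actual file may differ:

apiVersion: v1
kind: Service
metadata:
  name: akkahttp-service
spec:
  selector:
    app: akkahttp      # route traffic to every pod carrying this label
  ports:
    - port: 9000       # the port the service listens on
      targetPort: 9000 # the container port to forward to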

I want to stress here that this is not a problem of Kubernetes, rather it is a limitation of the Docker Toolbox, which in turn depends on the constraints imposed by virtual machines like VirtualBox, which act like a computer within another computer. To overcome this limitation, we need to create a tunnel. To make things easier, I included another script which opens a tunnel on an arbitrary port to reach any service we deployed. You can type the following command:

bin/forward-kube-local.sh akkahttp-service 9000

Note that the tunnel will not run in the background; you have to keep the terminal window open as long as you need it and close it when you no longer need the tunnel. While the tunnel is running, you can open http://localhost:9000/ip/8.8.8.8 and finally see the application running in Kubernetes.

Final Touch: Scale

So far we have “simply” put our application in Kubernetes. While that is an exciting achievement, it does not add much value to our deployment yet; we are merely saved the effort of uploading the application to a server, installing it, and configuring a proxy server for it.

Where Kubernetes shines is in scaling. You can deploy two, ten, or one hundred instances of our application by only changing the number of replicas in the configuration file. So let’s do it.

We are going to stop the single pod and start a deployment instead. So let’s execute the following commands:

bin/kubectl delete -f kube/akkahttp-pod.yml
bin/kubectl create -f kube/akkahttp-deploy.yaml
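
The key part of kube/akkahttp-deploy.yaml is the replica count. A sketch of such a deployment could look like this (the shipped file may differ, and newer Kubernetes versions use the apps/v1 API for deployments):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: akkahttp-deployment
spec:
  replicas: 2                  # how many pods Kubernetes should keep running
  template:
    metadata:
      labels:
        app: akkahttp          # the same label the service selects on
    spec:
      containers:
        - name: akkahttp
          image: localhost:5000/akkahttp:latest
          ports:
            - containerPort: 9000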

Next, check the status. Again, you may have to try a couple of times, because the deployment can take some time to complete:

NAME                                   READY     STATUS    RESTARTS   AGE
akkahttp-deployment-4229989632-mjp6u   1/1       Running   0          16s
akkahttp-deployment-4229989632-s822x   1/1       Running   0          16s
k8s-etcd-127.0.0.1                     1/1       Running   0          6d
k8s-master-127.0.0.1                   4/4       Running   0          6d
k8s-proxy-127.0.0.1                    1/1       Running   0          6d

Now we have two pods, not one. This is because the configuration file I provided contains the value replicas: 2 (as in the sketch above), and the two names are generated by the system. I am not going into further detail on the configuration files, because the scope of this article is simply an introduction for Scala programmers jump-starting into Kubernetes.

Anyhow, there are now two pods active. What is interesting is that the service is the same as before. We configured the service to load balance across all the pods labeled akkahttp. This means we do not have to redeploy the service; we can simply replace the single instance with a replicated one.

We can verify this by launching the tunnel again (if you are on Windows or Mac and you have closed it):

bin/forward-kube-local.sh akkahttp-service 9000

Then, we can open two terminal windows and watch the logs for each pod. For example, in the first, type:

bin/kubectl logs -f akkahttp-deployment-4229989632-mjp6u

And in the second, type:

bin/kubectl logs -f akkahttp-deployment-4229989632-s822x

Of course, edit the command line accordingly with the values you have in your system.

Now, try to access the service with two different browsers. You should see the requests split between the multiple available servers, like in the following image:

Kubernetes in action

Conclusion

Today we barely scratched the surface. Kubernetes offers a lot more possibilities, including automated scaling and restart, incremental deployments, and volumes. Furthermore, the application we used as an example is very simple, stateless, with the various instances not needing to know about each other. In the real world, distributed applications do need to know each other and must change configurations according to the availability of other servers. Indeed, Kubernetes offers a distributed key-value store (etcd) to allow applications to communicate with each other when new instances are deployed. However, this example is purposefully small and simplified enough to help you get going, focusing on the core functionalities. If you follow the tutorial, you should be able to get a working environment for your Scala application on your machine without being confused by a large number of details and getting lost in the complexity.

This article was written by Michele Sciabarra, a Toptal Scala developer.

The Ultimate Introduction To Agile Project Management


The Brief

You’re in charge of delivering your company’s latest and greatest initiative that’s going to change the face of “Widgets International” forever. It’s a software project that’ll engage and enthrall your customers, make your colleagues’ lives easier, and make the company millions in revenue. There’s a great deal of anticipation, fervour, excitement, and expectation. You need to get it done as quickly as possible, so your business can start to reap the benefits. The future success of the company depends on you. All eyes are on you. You cannot fail.

Agile project management

At first, you’re thinking to yourself “awesome, I’m up for the challenge. Let’s get this thing done!”. You pause for a moment, step back, and think to yourself “okay, so how do we do this?”. You start to talk to your colleagues and peers. You spend time searching for best practice software development and project management techniques, but the options and approaches are countless. There are acronyms and methodologies aplenty. Notable ones rise to the top. Doubt creeps in. Which one should we use? How can I guarantee success? What if I make the wrong decisions?

When it comes to managing software projects, there’s a heady mix of options supported by a myriad of opinion. Voices from the corners of the room whisper “try doing it this way”, others shout “this is the only way to do it”, and the rest just whimper “don’t manage it at all, just get on with it”. In reality, all those voices speak some truth. But what’s important is working out what’s right for your needs, your team, your business, and your customers.

Setting the Scene

There was a time when software project management sat squarely in one of three camps. There were the heavy frameworks that let you make decisions on how you execute and deliver, while offering a structure to maintain control and governance. There were prescriptive sequential methodologies like waterfall that forced you to plan lengthy projects, understand and commit to all your requirements, design and sign off complex systems, write lots of code, and then test (all before your customer gets to see it for the first time). And finally, the less prescriptive but iterative software development life cycles (SDLC) that encourage rapid prototyping or larger systems to be designed, built, and delivered in incremental steps, each building on top of the other.

Agile software development and Agile project management were born out of the inadequacies of waterfall and the benefits of iterative approaches to software delivery. They can trace their roots to the 1950s, thought leadership in the 70s, maturity in the 90s, and adoption through the 00s. In 2001, a group of practitioners and experts created the Agile Manifesto, defining 4 values and 12 guiding principles that seek to embody the spirit of Agile software development and to encourage its evolution. And it has definitely evolved.

Now, simply calling something Agile isn’t particularly helpful. The word, even in a software context, means different things to different people or organizations. There are many facets, definitions, implementations, and interpretations. Each body that embraces agile tends to try to give it its own definition.

Simply calling something Agile isn’t particularly helpful.

Suffice to say that Agile software development and project management are a group of related behaviours, frameworks, techniques, and concepts that fundamentally favor the delivery of the right working software as early and as frequently as realistically possible.

I mentioned earlier that Agile as applied to software development and Agile as applied to project management are different things. In a nutshell, Agile software development takes care of developing great software in a business-as-usual (BAU) or project context. Agile project management, on the other hand, takes care of the governance and control required to deliver complex projects, including but not limited to software.

There are many Agile software development methods available, such as Scrum, Kanban, XP, and Lean Software Development. But just as the game of rugby is about more than the scrum, so is Agile. In isolation, these Agile paradigms do not address the full lifecycle of project management required in complex projects, such as governance, resourcing, financials, explicit risk management, and many other important project management concepts. For these, you might want to consider PMI Agile or PRINCE2 Agile – think of it as “Governed Agility”.

Scrum and agile project management

Why Do We Need To Be Agile?

Long ago, we roamed the land gathering food and shelter to survive. Those were simple needs, met in a pretty agile way. Some time later, countries and economies grew and prospered on the back of the Industrial Revolution; this was the birth of management and control, and the loss of agility. Now we’re in the Information Age, where businesses employ knowledge workers. Knowledge workers are you, your partners, your colleagues, and peers who endeavor to create great solutions to customer, business, social, economic, and world problems. Knowledge workers apply analysis, knowledge, reasoning, understanding, expertise, and skills to often loosely defined and changing needs. These businesses and workers need methods and techniques that old Industrial Age processes and procedures cannot provide. Agile supports interactions.

Virtually no software project can confidently set out at the beginning and know all that it needs in order to deliver valuable working software without change. Change presents both opportunities and risks to the success of a project. Unmanaged opportunities can mean the difference between a great company and an awesome company. Unmanaged risk spells disaster and ruin. Agile manages change.

Adopting agile allows you to be responsive to changing or new requirements. It empowers development teams to be the experts and make decisions supported by an engaged, trusting, and informed business. It enables you to deliver to customers what they really want. Ultimately, it puts you and your organization in control of delivering high quality valuable software that delivers on customer need and expectations whilst extracting a return on your investment dollars as early as possible. Agile creates value.

There is a cost to adopting agile. It doesn’t come for free. Transforming to an agile approach for software delivery can be a hard path to follow. However, if you internalize the agile philosophy, tread carefully, engage the right team with the right attitude, break things down, make it achievable and realistic and respond to feedback, you will reap rewards. Agile emphasizes collaboration.

The following lists some benefits you can expect:

  • Speed to market
  • Earlier revenue generation
  • Regular delivery of real value
  • Protection for your investment
  • Data, data, data
  • Better product quality
  • Manageable expectations
  • Greater customer satisfaction
  • Higher performing teams
  • Improved visibility on progress
  • Predictability, transparency, and confidence
  • Manageable risk

“Success is not final, failure is not fatal: it is the courage to continue that counts.”

Winston Churchill may never have actually said this, but I think it’s a pretty good summation of Agile. We know Agile is the best foot forward for most projects. It encourages you to strive for success, to always iterate, and to keep building on what you have. Agile will encourage you to fail, but to fail early and move on. Having the courage to continue and to build the right solution, based on insight informed by your customers, is what brings the reward.

The thing to keep in mind is you can tailor Agile to your needs. Use the method and governance that is right for your business. Wherever you start, be true to the content, context, and spirit of the method you use – keep it vanilla. If you’re just starting out – Learn. If you’ve been doing it for a while – Understand. If you’re becoming awesome – Apply. Finally, if your business and your projects are complex and interdependent – Govern. Over time, you and your teams will figure out what works best for your business.

Feasibility

So now you’re thinking “okay, I get it. How do I start? Where do I start?”. Well, with all good things, we start at the beginning. And with Agile, it’s by asking yourself “What business value do I want to deliver?”. After all, that’s why we undertake projects, to generate business value. In order to establish if the project is worth undertaking to derive the business value, you need to understand whether it is feasible.

Vision

Is your project projected to increase revenue, enter a new market, acquire more customers, improve customer perception, or make life easier for a given problem you’ve identified? With this in mind, you can state your “Vision”.

  • Your vision may come from different sources – your own bold startup to fix a common problem, business management strategy, your CEO’s pet project, a specific product team, or even your customer’s needs.
  • Try to step outside your own shoes and “see” what the future looks like with your new product or service in the hands of your customers.
  • Engage your stakeholders – the CEO, product guy, and customers. Workshop it, don’t attempt this in isolation. Challenge assumptions and validate arguments.
  • Write it down, keep it short. Focus on the business value.
  • Refine it until you all agree that the vision resonates with everybody and carries a common interpretation of where you’re heading.
  • Your vision, if valid, rarely changes. How you get there most certainly will.

People don’t buy what you do, or how you do it. They buy the “why” you do it. This is what creates the emotional connection between your business and your customers. The vision will help illustrate this.

Is it feasible?

Feasibility comes in at least a couple of shades. Typically, you’ll want to understand if your vision of a brighter future for your business and customers is both technically feasible and that it’s feasible for your business to make it happen.

  • If your vision is to make travel to anywhere across the world possible in under an hour, you may have a problem with technical feasibility. Since science, physics, and technology haven’t quite caught up with that dream yet, your technical solution may not be viable in anything other than theory. In addition, if your solution is new, this would go well beyond the idea of a Minimum Viable Product (MVP).
  • To test the technical feasibility of your product, consider exploring it further either in a Discovery prototype project or by running a spike in the early stages of the project. You’ll know which method to use by thinking about the scale or complexity of the solution you have in mind.

    “Some of the best knowledge my teams have gained in understanding technical feasibility has come from performing a spike. And often, it’s the simplest solution that wins out!”

  • The second shade of feasibility to consider is whether you, your team, or your business has the skills and motivation to make it work. For example, if you’re great at baking cakes at home for your kids’ birthdays, that’s sweet. But if you want to turn this into a business selling the finest cakes to the world, you need to understand if you can make it scale, handle the business as well as the production, manage distribution and fulfillment, and take care of customer service.
  • This type of vision might be achievable in the long run. But for now, possibly not. So scale it back, think small, take a small chunk that looks realistic and concentrate on delivering the best but smaller aspiration you can. If that manages to engage and delight your customers, get them coming back for more and telling their friends, then scale it up from there using your customer feedback as your guide and compass.
  • Also, you need to know if your project is feasible in terms of budget and timeframe. Can your business afford to deliver this project? Is the timeframe achievable? Time and money are two of the three constraints in an Agile project that are fixed. We aim to deliver within a given fixed time and within a given fixed budget.
  • The quality of a product refers both to the end product that your customers use and to the engineering practices your team uses to deliver great, robust, and reliable software. Quality is also something we don’t short-change on. Quality criteria, on the other hand, can change. If you’re not setting out to build a Ferrari, the product may not need a high-quality perception. If you’re not building space rockets, then the tolerances attained in production may be much looser. Set the appropriate tone and expectation early on.

So now you’ve confirmed your dream is more than a chocolate fancy. Set about testing your assumptions and proving to people that this endeavor is worth investing in.

Justification

Now depending on your circumstances, justification will come in different forms. But essentially, you want to prove that this project will satisfy customer success criteria, has a chance of success, will deliver value, and is affordable.

  • State your assumptions based on your customer need, then validate them. The Lean Startup gives great guidance on identifying and proving that your product is needed by your customers and will create value.
  • Write, test, and validate your business plan. Now, this looks nothing like the ones your bank or Business and Finance major told you to produce. Don’t use those; they will be out of date before the ink is dry. Instead, check out the Business Model Canvas. This is essentially a short-form business plan that keeps your focus on your value proposition, your customers, revenue, and costs. Use it to validate whether you have a business that will work.

    “I ignored this advice once and spent a long time writing a lengthy traditional 50 page business plan. It got me nowhere. All the assumptions I had made were unfounded, and all the projections I made couldn’t be validated. It was a painful and expensive experience that taught me to never do it again.”

  • If you’re in a mature business with portfolios of projects being delivered in a complex environment, then financial modeling may be necessary. If you must, do this only after you’ve proven the above.
  • Once you’ve built your MVP, there may be a case for creating a more traditional business plan. For example, if you have to go for funding or selection within your company’s portfolio of competing projects and resources. But this will be a business plan based on and informed by the tools used above. It will be lighter too.
  • In any case, use these tools as living, breathing artifacts. Use them as your guide and bellwether. They are never static. Refer to them and revise them as your project or business evolves.

Once you have your justification and all your stakeholders are onboard, you’ll be on fire.

The Feasibility phase is typically performed once in the life of your project. You may find you revisit the vision and feasibility of the project, especially if your data, customers, market or business indicate so. At the very least, they will be your guiding lights throughout.

Initiation

Awesome. The decision has been made, the project has the green light, and you’re ready to build. Well, nearly. I know you’re thinking, “c’mon already, really? If we don’t do this now we never will. Let’s get this show on the road!”. But consider this – Agile is nothing if not about delivering value early and often whilst delighting your customers along the way. Taking some time to figure out the best way to deliver your project lays the best foundation for success.

The Team

In sport, by thinking about your favorite team game, you’ll be able to recognize the key roles that enable the team to perform as it does. Traditionally you’ll find a manager, a captain, and the rest of the squad. Outside of that, you’ll find coaches, physios, nutritionists, and an assortment of supporting staff. But if we look at the game of rugby, there’s a team within a team: the players that make up the scrum. This pack is made up of designated players whose job it is to win the ball back and continue play. When a scrum is in play, the players from each side work with no leader, as a single unit, as collaboratively, communicatively, and efficiently as possible to get the ball back in possession. It is the game of rugby that inspired Jeff Sutherland to name his software development methodology Scrum.

  • Scrum is not the only software development method in the Agile playbook. But it is the one that best describes the Agile concept and behaviors of working as a team, motivating individuals, creating trusting relationships, self-organization, servant-leadership, communication, transparency, and collaboration.
  • Your team will be formed largely by the circumstances you find yourself in. You may have developers available to you; some, none, or all of them may be familiar with Agile to varying degrees. You may want to hire a new team or partner with a 3rd party.
  • Other roles will be required too, but we’ll discuss those later.
  • It has been said that once you form your development team, you’ve chosen your technology, because depending on where you bring your team together from, they’ll come with certain skill sets. So, think carefully about how you form your development team and whether you need to perform a technical evaluation before you get to this point in your journey.
  • This brings us to cross-functional teams. Teams work best when they work together, when individuals pitch in to get the job done regardless of their “title”. Try to build a self-sufficient team of individuals who take on more than one role.
  • Build an environment and culture centered on relationships – a place where the team can deliver, unencumbered by constraints or restrictions. Give the team the tools, people, resources, and space to be effective and performant.
  • Keep team sizes to no more than 7 or 8. If you need many more developers, break the teams down. Each team might then be responsible for a given functional area. If you have multiple teams in multiple locations, consider holding a Scrum of Scrums. And where these are numerous in complex environments, use Agile project management.
  • Ensure that the team, business, stakeholders, and even customers have access to each other. Ensure they communicate and collaborate, and remove anything that gets in the way of progress. Daily communication is the best cure for project ailments. When people speak, they get stuff done.

There are many ways a team can be put together to deliver software.

Project Brief

In Feasibility, you figured out the “why” of your project and either built your confidence to forge ahead with your startup or got backing to proceed. The project brief is the living document that brings together the “why” with the “what”, “when”, and “who”. It’s living because, as you progress from here forward, your knowledge, understanding, and path may change. To leave this document once written and never return to it is to consign your thoughts to a point in time. In an Agile world, your point-in-time reference may change weekly or even daily early on, so it’s important to keep this fresh.

  • A great tool for encapsulating and maintaining your project brief is something that Jonathan Rasmusson calls the “Inception Deck” in his book “The Agile Samurai”. Here you’ll find great advice on ensuring that everybody that is interested in, affected by, or involved with your project is on the same page.
  • The greatest enemy to delivering projects is in having an unclear, inconsistent, or just plain different understanding of what the project is and what “requirements” are to be satisfied. If even one important stakeholder has a different understanding or view of what you’re doing, the consequences can be substantial.
  • A good project brief communicates:
    1. A common and agreed expectation between stakeholders and team members.
    2. A shared understanding of the project across all parties.
    3. The goal, vision, objective, scope, and project context.
  • You’ll have a lot of good information for the brief gathered from Feasibility. The project brief will help you define and find the answers to searching questions. It will bring together stakeholders, your raison d’etre, high level scope, risks, target solution, budget, timeline, expectations, and your priorities.

“A colleague stopped me in a corridor once and asked me where he could get the project brief for the project. I quipped ‘We don’t need a brief, we’re Agile’. He looked confused, as if he was questioning my sanity or authority. He was right to do so.”

Before you proceed, ensure you’ve got everybody on the same page: workshop it, ask the difficult questions, and nail it up somewhere people can stop, read it, comment on it, and help revise it.

Culture and Ways of Working

You know the way your business works and its culture, the way it likes to get stuff done. Agile, by its very nature, may challenge some of these ways of working that your business has cultivated over the years. Don’t expect Agile to be implemented and for everybody to lovingly adopt it from the outset. Some people may find it confusing and view it only with dread and fear. Some people may openly refuse to engage with it. These are challenges and perceptions you must overcome. But in your early days, don’t go around waving the Agile stick, beating anyone who won’t listen with it. That won’t build trust, adoption, or engagement.

“I was a fan of waving big proverbial sticks once, and it earned me a lot of negative press. I turned it around, but not before suffering considerable pain first.”

As you set out on your path of adoption, tread carefully, respectfully, and with empathy. If you’re in a creaking old traditional business, trying to align the whole business at once may not be the best approach. Start small and incrementally earn respect and recognition. Start with your team only. Once you start delivering software quicker and with better quality than ever before, people will start to notice and will want to come play your game. When they do, offer them the ball, take them out for a coffee, and ease them into your new world. Help them.

Now that your team knows what the project is about, and your plans for Agile adoption are agreed, let the team decide how they wish to behave and operate as a team.

  • Guide your team to identify the Agile concepts, techniques, behaviors, and frameworks that you feel fit your collective needs.
  • Take requests from your team members as to what they require to help them perform as best they can. Some of these requests you’ll be able to resolve immediately; for others, you may need to get budget or outside help. Do what you can to make it happen.
  • These are your first steps to becoming a true servant-leader.
  • Consider organizing some appropriate training in the concepts and techniques your team are choosing to adopt. This is the best way to ensure all of your team, even stakeholders, are on the same page and get the same message. Work with a supplier organization that can tailor their offering to your needs.
  • Be prudent. Nobody will be an Agile ninja after a few days in a workshop learning how to become Agile. Your path will be long. The word “become” is deliberate. Only when you truly embrace Agile will you see its value. It should excite you. If it excites you, then go excite others too.
  • Now that your team has agreed on the concepts and techniques, had their wishes fulfilled, and been through Agile training, turn the team’s attention to themselves and what they expect from you, the business, and each other.
  • Defining some Ways of Working (WoW) within and by the team helps build trust, relationships, and expectations. The WoW is no War and Peace. It should be short and to the point: between 7 and 10 bullet-pointed sentences that state clearly how people behave, communicate, collaborate, support, deliver, and perform together. It should also state what the team expects from the business.

  • Agile is as much a mindset as it is guiding principles and concepts. It helps you develop the way you behave, think, negotiate, interact, communicate, perform, and improve. It relies on motivated individuals supporting each other to reach a common aim, together as one. There is an African proverb:

    “If you want to go quickly, go alone. If you want to go far, go together.”

By now your team should be super excited, energized and motivated. Now engage them further with your backlog of User Stories.

Backlog

Have no doubt in your mind that there’s uncertainty involved in your project. You can’t possibly know exactly what it will take to build the right product for your customers so early in its life. You cannot gaze wistfully into a crystal ball and predict the future.

The “backlog” or “product backlog” is where your requirements live. Agile favors the writing of short, pithy statements that capture the essence of a “requirement”. The backlog is simply a long list of entries, each entry defining a single, discrete “requirement” as a User Story. And from now on, we’ll be using the words User Story, and not “requirement”. You’re probably asking “why?”. That’s a good question. Since time immemorial, the features or facets needed in a software project by a customer have been referred to as “requirements” (ah, I used it again! Last time. Promise). That word has an interpretation that has no value in Agile. The Oxford Dictionary defines it as:

“A thing that is needed or wanted.” Or, “A thing that is compulsory; a necessary condition”.

And unfortunately, if we set out defining what our solution should be by stating that things are “compulsory”, we will end up in trouble. It’s too easy to say that all these User Stories are compulsory. If we take that view, we run the risk of running over schedule and over budget in the attempt to deliver all of a given scope. It’s not a problem to say that, for this product, these stories are needed for the solution to be viable; we just want to avoid the interpretation of that particular word.

  • Always write stories from the perspective of a persona. A persona represents a user or stakeholder of the solution. It’s a good idea to develop these personas before you start a backlog.
  • At this stage, only write short, simple statements that serve as a reminder to have a deeper conversation about the User Story when the time is right.
  • Real people think in terms of tasks that they need to get done to achieve a goal. Write your stories from the persona perspective and in terms of what they need to get done.
  • You don’t need to write the full backlog, just write as much as you can imagine your customers will need for your product to be viable.
  • You’ll discover later on through the life of the product that User Stories will change, become more or less important, or can be deleted completely. Releasing often, getting feedback, and assessing what’s a priority will inform this behavior.
  • Don’t write stories in isolation. Engage your team, stakeholders, even your customers. Stories can be updated at any time in the life of a project, but avoid changing them once development work has started.
  • Some of your stories may be considered as “Epics”. These are large stories that cover a lot, and will be broken down closer to the time of delivery into smaller stories.
  • Consider using the INVEST model, a checklist for validating the quality of a good User Story.
  • Anybody can add a story to the backlog. It should be placed at the bottom, or in a specially created “Parking Lot”. This added story serves as a prompt to discuss it with the team and the business. If the business and team support it, it can then be estimated and prioritized.
  • You may also consider those parts of the system that are most risky. If you have a User Story or feature that is complex, new, or technically unknown, prioritize it to the top of your backlog. This way you won’t be attempting to deliver the challenging and critical parts of your product just weeks before your first release.

Once you have a backlog that fulfills your needs, you can estimate the stories in it, rank them in order of priority, and build a release plan.

High-Level Estimation and Prioritization

High-level estimation is the process of sizing your backlog. How “big” is the project, and what value does it deliver? Prioritization is the process of deciding which stories are most important to you, the viability of the product, and the interests of your customers. We want to deliver the highest-value items earliest to maximize value to the business, get feedback from the customer, and avoid sweating the small stuff. The output will be an ordered backlog that is ranked in priority and sized.

  • Stories at the top are considered most valuable. We want to deliver the most valuable items as early as possible.
  • There are many techniques for sizing and estimating, but at this point you just want to get a good indicative feel of the size of a story.
    • Use t-shirt sizes, relative sizing, ideal days, or story points.
  • You won’t have all the available information at this point either, and that’s okay. Just run with it.
  • Engage your business stakeholders or product manager if you have one, and the team that will be doing the work.
  • We want those that will be designing, developing, and testing the work to size it, because the best people to estimate are the experts.
  • The team may start to break stories down into smaller parts. If this happens, write stories that are more granular but discrete.
  • The team may also start to rank some stories, as naturally some things have to get delivered before others to support the technology or a given user journey.
  • Between you and the team, you may also start to find holes in the backlog that need to be filled. Just fill those holes with new stories and estimate and prioritize as appropriate.
  • Prioritization is most easily performed using a MoSCoW analysis. MoSCoW is a simple technique that helps you decide which stories are “must haves” for your product to be successful (see the illustration after this list).
  • You may do a prioritization pass before estimation begins. However, the sizing of certain elements may also determine a decision on priority and real business value. So play estimation and prioritization off of each other, but don’t squabble over it!
  • No two stories can be equally important. The story at rank 1 is more important or valuable than the story at rank 2.
  • A great way to demonstrate the importance or value of a story is to add a monetary value to it. If, for example, story A is thought to bring in $5000 of extra revenue, and story B might attract $100, then story A is more valuable. Equally, if story C saves the business more than story D, story C is more valuable.
  • Once you’ve sized your backlog, you’ll be left with a number. When we come to release planning, that number will help us understand how much can be delivered by our team within a given timeframe.
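As a quick, hypothetical illustration of MoSCoW (the categories being Must have, Should have, Could have, and Won’t have this time): an online cake shop might rank “take card payments” as a Must, “customer reviews” as a Should, “gift wrapping” as a Could, and “loyalty scheme” as a Won’t have this time.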

Remember that you don’t need to know all your user stories up front. Also, remember that it’s not necessary to deliver all of your stories before a customer sees your product. You want to remain Agile – and that means only creating what you need to when you need to, wasting as little as possible, and responding to changes in customer needs and market conditions. A roadmap will help you lay out your product and plan your objectives for the next 3, 6, 9, and 12 months.

Roadmap and Storymaps

A roadmap is exactly what it sounds like: it offers the same view as a road map of a country. It details the relative position of cities (or in your case, features) to each other and the routes that can be taken to get from city A to city B, or from feature X to feature Z. It doesn’t tell you which route you should take, or how you should get there. It doesn’t tell you which mode of transport to use, but it might illustrate options to take the highway or the train.

In a city, there are many roads, buildings, parks, services, and facilities. All features of a city. This is also true of the roadmap for your product. At this level, your roadmap shows major goals or milestones to be achieved. A goal is a logical grouping of themes, features, and User Stories rolled up in a consumable view that demonstrates tangible value. The roadmap of a software product shares this view and communicates your intent. It doesn’t necessarily show you how or when features will be delivered, only the relative value of the goals and features to you and your business.

One great way to demonstrate a roadmap is to generate a story map. This tool indicates customer-valued prioritization. It lays out the backbone, or essential building blocks, of your product. The walking skeleton hangs off the backbone and illustrates the features that make it an MVP. All the other features are what add further value and importance to the system. The story map lays out features in relative position to each other and is an awesome visual tool.

It’s worth noting that after carrying out a story mapping exercise, your backlog may need to be refined. This will be apparent where stories have been split into multiple stories, identified as redundant, newly created, or as a higher or lower priority than previously thought. The story map is another artifact that is continually revisited and revised.

The Initiation phase is typically performed once in the life of your project. However, many of the tools and documents you created will be revisited and revised throughout the project.

Release Planning

“At last”, I hear you cry, “finally some planning”. Well, you have essentially been planning all through the Feasibility and Initiation stages; we just didn’t call it that. This is evidence of iterative or adaptive planning – the art of only planning enough to achieve your immediate and most valuable goals. We’ll see more about adaptive planning later, but for now release planning is our focus.

Your release plan may well be determined by external events. Perhaps there is a trade show you want to demonstrate your app at, or your customers will get the most benefit using your app in the run-up to Christmas. These are timeline events that your goals may be aligned with. You would most likely plan to deliver the user stories or features that make the most sense to facilitate these events. If there are no external dates to consider, then you can just go with feature prioritization, delivering earliest what makes the most sense and delivers the most value to your customers.

  • If you created a story map in Initiation, this will help guide your release plan. Use it to identify your MVP, the minimum feature set that will get your product in the hands of your customers, start earning revenue, get feedback, and acquire more customers.
  • The story map will help you carve out future releases too. But keep in mind that as you learn, get feedback, inspect and adapt, future releases may change. So don’t plan in great detail.
  • You may have from 2 – 4 releases in a 12-month period. Don’t do fewer, because your first release is your MVP and gets your foot in the door, after which you’ll want to iterate, release more features, and fix bugs in a regular cycle. Don’t do more unless you’re performing well and have plenty of Agile techniques and tools in place to manage continuous delivery.
  • Each release is a timebox which is broken down into smaller iterations. An iteration is a timebox. The timebox is one of the most important control measures in Agile.
  • To size your release:
    • Divide your release timebox by your iteration length of 2 weeks. This gives you the number of iterations. So if you have a release of 12 weeks, you get 6 two-week iterations.
    • Then remove two iterations – you’ll reserve one for a “Sprint 0” iteration and another for a “Release Iteration”. This leaves you with 4 development iterations.
    • Work with your team and product owner to fill each iteration with stories, starting from the top of the backlog and working down. When the team thinks they’ve filled an iteration with enough stories they can realistically achieve based on their capacity in the 2 week time frame, repeat for the next iteration(s). Use the release plan and story map to guide what goes into each iteration.
    • Do not plan the next release yet. You’ll do that as you near the end of the current release.
    • Take the user stories from each of your iterations and add up the story sizes. If your iteration 1 has 25 story points, but iterations 2, 3, and 4 have 10, 45, and 65 points respectively, you will need to refactor. Target an equal number of story points in each iteration; a worked version of this arithmetic follows this list. This becomes your anticipated velocity – the amount of valuable “stuff” completed in each iteration.
  • The team may not have worked together before. They are almost certainly working on a new problem or product. They will not perform at their best from day one. For this reason, your velocity may be volatile in the first few sprints. But as the team settles down, it should stabilize. Use this data to refactor your release planning which in turn helps you plan your product with a known velocity and confidence.
  • If necessary, break stories into smaller chunks and resize.
  • Your release plan, especially early in the life of a project and a new team, is only ever a guide. Do not treat it as a commitment or guarantee that all or these exact stories will get delivered as planned. As your team matures, work gets done, and trust and confidence build, so too will the accuracy of your plans.
  • Never force your team to “commit” to more than they are comfortable with. Trust their judgement and their expertise.
  • Future release planning exercises will be simpler: you’ll take the number of iterations in the release, multiply it by your team’s velocity, and fill the release plan with stories that add up to velocity x number of iterations.
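To make the sizing arithmetic concrete, take the numbers from the bullet above: iterations 1 through 4 hold 25, 10, 45, and 65 story points, a total of 145 points. Spread evenly, that is roughly 145 / 4 ≈ 36 points per iteration, which becomes the anticipated velocity to refactor each iteration toward.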

Consider this example. If you go to a restaurant to eat, you wouldn’t order every item on the menu and expect to eat it all in one sitting. You’d never be able to eat it all, you might not be able to afford it, you’d get sick of the food, and the restaurant might close whilst you’re eating the 5th of 19 courses! You would not leave a happy customer, the restaurant might not find you a great customer, and the experience would be bad all round. More likely, if you love the restaurant, it’ll be because you enjoyed a lovely meal there once. You’ll decide to go back and enjoy a different meal. You’ll tell your friends, and you’ll go there often. The moral of the story is:

  • Plan your releases in small chunks.
  • Consider your capacity.
  • Don’t take on more than you can realistically achieve.
  • Revisit the plan often to see what you can change and improve.
  • Plan, Execute, Inspect, Learn, Adapt – Repeat.

Release planning takes place often in a software project. Each new release requires a release plan. The release plan can also be refactored at any time during a project. Just take care to not overdo it and fall into zombified planning psychosis. At the end of release planning, you’ll want to prepare for the first iteration, which is where we’re happily going next!

Product Iterations

Your team is in place, they’re motivated, you have an engaged business, your initial planning is done – you’re now ready to build your dreams.

We’ve talked earlier about some of the tools, techniques, and concepts that Agile subscribes to. There are many resources already available that do a great job at laying the foundations for delivering an Agile software project. Pick one, keep it vanilla, and grow into your Agile journey. To help shorten the trauma in deciding the right Agile software development methodology to start with, I’d recommend Scrum. And only Scrum. The temptation will be there to use elements of other methodologies. Don’t do it yet. Save that kind of change until you have lived and breathed Scrum for 6 or 12 months. Then, if either you’ve determined it alone does not work for you or you want to mature as a team, steadily introduce new methods, techniques, or frameworks.

I chose Scrum as the recommended approach for new teams adopting Agile because it has all the basics built in. It’s very popular and has many good-quality communities and resources online, in books, and in the training room. It will serve you well, even for the smallest of teams. The rest of this post is dedicated to discussing some important aspects of software delivery that you, your team, and stakeholders should always keep in mind.

Adaptive Planning

Planning in an Agile project is an ongoing process. We do some initial planning up front, just enough to understand what we know at a given point. Our initial plans will be loosely defined and flawed. And then we iterate our planning, adapting to new information, planning in greater detail just before we enter into delivery, responding to changing scope. This is one way of minimizing waste – only putting effort into planning when we need to.

  • The team and the business, or its informed and authorized representative such as a product manager, actively plan together – the team because they are the experts who will deliver, the product manager because he or she is the expert who can guide the needs of the business.
  • Estimates for user stories will be less accurate the further out they are from being developed. For example, epics will attract high level estimates that will be based on a lot of unknowns. Well defined granular user stories that are estimated just before a sprint starts will be much more accurate.
  • There are many estimation “scales” you can use. Use the technique that feels right for your team and the stage of planning – wideband delphi, planning poker, ideal time, relative sizing, story points, or t-shirt sizes.

  • When sizing a story, one size really does fit all. All stories should be sized using the same technique and encompass all elements such as design, development, testing, refactoring. When you come to do iteration planning for a sprint, certain tasks can be created which all contribute to the completion of the story.
  • Always factor risk, unknowns, outside influences, your team’s capacity, and ever improving velocity in planning.
  • If user stories that were taken into a sprint were not completed, we do not extend the timebox. These unfinished or unstarted items are put back to the top of the backlog and taken into the next sprint.
  • Always plan to deliver the least amount required to achieve a given goal. Identify techniques to prune features. Reduce waste, find the real value that can be realistically delivered with your time constrained resources.

Story Creation

Stories get elaborated upon when you need them. You don’t need full story explanations for features that are 6 months away from being delivered. Writing them at the beginning may be wasted effort if that need later disappears from scope. Write your stories at most 2 iterations before they are needed. Reducing that timeframe to 1 sprint is preferable.

Let’s use your time in Sprint 0 to set the scene. In two weeks you’ll start development. It’s now time to take enough stories from the top of the backlog that could potentially get delivered in sprint 1. You might take 10 or 15 percent more if you’re unsure of your velocity. These one-liners can now be expanded into truly valuable documents with scenarios, acceptance criteria, and wireframes. If the wireframes haven’t been created yet, now’s the time to do so. You might find as you review these candidate stories that they need to be broken down. Perhaps they were Epics that couldn’t possibly be completed in a sprint. If you break stories down, re-estimate them with the team.

A good story follows these rules:

  • Written in a common format, e.g. “As a <persona>, I want <goal>, so that <benefit>.”
  • Includes acceptance criteria that the story must meet to be considered acceptable to the business for sign-off.
  • Uses language that the business and your customers understand.
  • Follows the INVEST model.
  • Includes all supporting documents to inform the developer, designer, and tester: wireframes, technical design overview, other assets.
  • Meets your standard Definition of Done criteria.
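As a purely hypothetical example of the format in practice: “As a registered customer, I want to reset my password, so that I can regain access to my account.” The acceptance criteria attached to that story would then spell out the scenarios that must pass before the business signs it off.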

Sprinting

Whether you call it a sprint, an iteration, or a timebox, each incremental delivery of your Agile project is time-constrained. The timebox doesn’t shorten and it doesn’t lengthen. Focus on 2-week iterations at the beginning. You might find that 1, 3, or 4 weeks suits your business model better. Once you choose a length, don’t change it. You want to maintain a regular cadence, or a sustainable pace. This means the team and business focus on delivering regular software, releasing potentially shippable increments every two weeks without mad-dash overtime being employed to get the job done.

  • Working in small increments has a huge benefit. It means you’re only focusing on the immediate future of delivery; and with new input, feedback and learning, you can respond to change within a short iterative cycle.
  • At the beginning of a release, perform a sprint 0. This lets the team, the business, and your release get geared up, and sets the mindset for successful delivery. Draw out the base technical framework and architecture that will support the foundation for the release. Set up environments, accounts, and tools. Perform spikes to understand difficult or unknown problems. Elaborate your user stories in readiness for sprint 1. Sprint 0 is about getting prepared.
  • During a release, keep refining the backlog. As you understand more or learn something new, your priorities may change, new requirements may unfold, and what you previously thought was a great feature may get removed entirely.
  • Don’t start work that has no chance of completing within a sprint. If it can’t, break it down into smaller chunks. And don’t start new work when previously started work has not been completed. You create no value by starting lots of things that can’t be considered complete. Further, avoid context-switching. This is the activity of starting one task, getting disturbed, starting another, and at its most problematic, completing neither.
  • Limit the amount of work that is in progress by the team at any one time. For example, if you have 3 developers and 1 tester, you may put a WIP limit on the developers and on the tester.
  • Never add more work to a sprint after it has been planned. Never stop a sprint part way through. The exceptions to this are:
    • If you performed faster than expected, consider taking the next story from the top of the backlog, as long as it is suitably prepared.
    • If the sprint is performing so badly that it will not complete. This usually only happens where there has been a catastrophe of some description.
    • If the sprint objective can no longer be supported due to immediate changing needs of the business.
  • If you do cancel a sprint, do it gracefully, take time to refocus, and start a new sprint as you would any other.
  • Toward the end of a release, consider a final release sprint. No new features are written, some bugs can be cleaned up, and preparations can be made to actually release a new version of your product to your customers. That’s not to say you can’t make incremental customer releases in the meantime – it’s just that this is a controlled, measured, and sustainable release mechanism.
  • At the end of a release you might consider a one-week sprint. In this sprint, you might work with the team to break down some new ideas or figure out some new technology. These are great exercises that benefit the business, and they give the team some breathing space to think and be creative. It’s not for goofing off; it will still create value. Equally, everyone needs a break. Taking a vacation at this time helps keep your cadence and velocity in good shape when you’re mid-release.

Defining Done

Defining what “done” really means is very important. There are many versions of “done” within the life of your project – what it means to be “done” with a story, release, or whole project. It all boils down to what you, your team, and business will consider as complete to the right level of quality to satisfy readiness to ship.

For your team, the definition of a “done” story will be something like: all code complete, peer reviewed, meets the defined acceptance criteria, unit tested, UAT’ed, and pushed to your code repository. To enable the passing of a story from designer to developer to tester, each step will have its own definition of “done” to be accepted by the next person in the chain. Your product owner will have expectations as to what “done” means to her in order to release the product increment to your customers. In any case, everybody must be aware of what “done” means and be a responsible party in ensuring its meaning is met. Define your definition of “done”, communicate it, agree upon it, and evolve it. Done done.

Continuous Measurement

If you can’t measure it, you can’t manage it. The same goes for improvements. The need to gather empirical data in an Agile project is almost as vital as having blood course through your veins! How do you know what needs managing, correcting, or improving if there’s no data? Simply put, you’ll be relying on gut feel and unsubstantiated guesswork, which falls apart pretty quickly under scrutiny. And depending on who’s doing the scrutinizing, this can be a rather uncomfortable place to be. So from the outset of your project, ensure you know how you are going to demonstrate progress and by what measures others are going to view your success.

Fortunately, Agile comes loaded with useful tools and techniques. The first thing to do is go back to the Agile Manifesto, type the following words into your favorite word processor, blow them up to 96pt, print them, and stick them on the wall for all to see:

Working software is the primary measure of progress.

Your greatest demonstrable power in delivering software is to show it to people working, doing what it’s supposed to do. Not only will this make your customers happy, it will earn your team great respect and grease the wheels for greater adoption through the business.

Here are some other tools:

  • The daily standup: There are a few variations of this ceremony, but the essence is to have all team members talk to each other face to face, keep it short, keep it focused, and keep it light. If anything needs discussion at great length, park it for a longer conversation between those needed after the standup. If impediments are raised, write them up like a story, add them to your backlog, and get them addressed asap. Anything that is impeding your team slows their progress and will be demonstrable in reduced velocity and software that does not meet expectations.
  • Velocity: This is a historical tool. It’s a little like those financial warnings that say past performance is no guarantee of future performance. But in Agile’s case, we do hope to achieve a team velocity that is largely smooth. It’s velocity that allows us to project future performance and have confidence in our plans. There may be influences outside your control which lower the number of story points output in a given sprint. If this happens, you’ll probably be able to recover. Never use velocity as a stick to beat your team with; it will win you no favors. One thing for sure is that velocity will be erratic for the first 2 – 4 sprints. Somewhere in that timeframe, though, you should start to see consistency and stability. If your velocity is wavering from one extreme to another, you’ve got a problem which you’ll need to fix with your team.

  • The burndown chart: Now this measure of progress is a thorny one. For that reason, I haven’t given a link to go find out more; you’ll have to do your own research and agree as a team and business which version works for you. The reason it’s thorny? Well, not one resource out there tells the same story! One thing agreed upon is that it shows, within a sprint or a release, how you’re performing against your timebox. If maintained daily, it will show if you’re on track or deviating. Some teams use it to represent how much value is left to be created; more often than not, others use it to show how much work is left to be completed. The former is a celebration of success and value generation; the latter is less inspiring and motivating.
  • The burnup chart: If you must show work completion rates, use the burndown chart for that. The burnup chart, however, allows you to show how much value has been created and how much more value you’re planning to create by the end of the sprint. It is a much more motivating tool for a team, demonstrating to the business both what has been done (or how little, if things aren’t going so well…) and what they still have their sights set on. In any case, use the charts to spot where progress is not tracking as expected, look for patterns or deviations, and get on top of them to fix the underlying problem as soon as possible. Update them daily for sprints, and once at the end of each sprint for release-level charts.
  • Task boards: These are great visual radiators for demonstrating value being created. When updated daily, or at any point in the day, they immediately show progress being made. If combined with Kanban, they’re also great tools for visualizing flow and blockages in the system. If you can see that loads of development is complete but testing is not as productive, you can see the problem happening and respond appropriately and swiftly.

Other tools to consider are Agile Earned Value, cycle time, and Cumulative Flow Diagrams (CFDs).

Keep these measures, charts, and other tools visible, preferably loud and proud on a wall for all to see. The team, stakeholders, and other interested parties can then immediately see the status of a project. It’s transparent and serves as a valuable communication tool. If you can’t put these artifacts on a wall, use tools that are shareable and collaborative, and make sure those that want access have it.

Continuous Improvement

Throughout your Agile life, seek to identify and learn where improvements can be made. Lessons should not be captured and learned only at the end of a project. It’s like passing your driving test and tentatively taking your first drive without an instructor. You’ll know what works and what you’re supposed to do, but over time you’ll tailor your driving skills and capabilities, learning new techniques. You’ll even pick up bad habits. Look for them, understand them, and find ways to improve.

There are many opportunities for identifying what does not work and applying remedies. The built-in approach to this in Agile is the retrospective. This is the primary tool for reflection and adjustment. At the end of every sprint, take time with the team to improve how work gets done, how quality is delivered, how efficiency can be maximized, how waste can be minimized, and how capacity is increased. When you identify measures for improvement, don’t be tempted to fix all your problems right away. Identify the ones that will have the most impact and can be implemented in the next sprint. Measure and observe. If a change had the desired impact, lock it in: write it up into your ways of working and definitions of done. If it doesn’t work, think again. Any lessons learned that don’t get put into the upcoming sprint can be parked and prioritized for attention in the sprint after.

Tailor the process. Remove anything that does not work. Remove impediments. Your maturity as an Agile team will know no bounds if you let it.

Beyond Agile Project Management

It’s important to know what happens after the project is delivered. Support and maintenance are key to ensuring that once the product is in customers’ hands it remains performant, customer feedback can be factored into future releases, and customer issues are dealt with appropriately. A project is a unique, time-constrained endeavor. The product it delivers has a life after the project team has been disbanded. Ensure you are capable of supporting the product once it is live.

Agile projects can co-exist with more traditional approaches, balancing the requirements for budgetary control and stakeholder visibility against the Agile aims of flexibility and responsiveness.

A governance framework, or Agile governance model, is used in conjunction with standard Agile processes such as Scrum. They work in two specific ways:

  • They provide a wrapper for an Agile project by clarifying the processes that occur outside of development iterations (sprints). This includes providing clear criteria for the successful completion of project initiation and proper validation of the decisions and plan.
  • They shift the emphasis of specific parts of the standard Agile process and highlight particular principles and techniques that need governance or support governance.

In today’s ever-changing world, organizations and businesses are keen to adopt a more flexible approach to delivering projects, and want to become more Agile. However, for organizations delivering projects and programs where formal project management processes already exist, the informality of many of the agile approaches is daunting and is sometimes perceived as too risky. These project-focused organizations need a mature agile approach – agility within the context of project delivery – Agile Project Management.

Learn and grow with your adoption of Agile. Only ever do what your team is comfortable with, ensure their voice is heard, and act on their requests. Encourage your team to adopt new and more valuable techniques when the time is right. Negotiate with the business and encourage them to understand what it means to be an Agile organization.

Enjoy the journey.

This article was written by PAUL BARNES, Toptal’s Head of Projects.

MySQL Master-Slave Replication on the Same Machine


MySQL replication is a process that enables data from one MySQL database server (the master) to be copied automatically to one or more MySQL database servers (the slaves). It is usually used to spread read access across multiple servers for scalability, although it can also be used for other purposes such as failover, or analyzing data on the slave so as not to overload the master.

As master-slave replication is a one-way replication (from master to slave), only the master database is used for write operations, while read operations may be spread across multiple slave databases. What this means is that if master-slave replication is used as the scale-out solution, you need to have at least two data sources defined: one for write operations and a second for read operations.
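To make that concrete, here is a minimal sketch using the mysql client against the two instances this tutorial sets up (the master on port 3306, the slave on port 3307). The app_user account and the mydb.users table are hypothetical placeholders; a real application would do the same through its database driver:

# write operations always go to the master instance
mysql --host=127.0.0.1 --port=3306 -u app_user -p -e "INSERT INTO mydb.users (name) VALUES ('alice');"

# read operations can be spread across one or more slave instances
mysql --host=127.0.0.1 --port=3307 -u app_user -p -e "SELECT name FROM mydb.users;"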

MySQL master-slave replication

MySQL developers usually work on a single machine and tend to have their whole development environment on it, with the logic that they are then not dependent on a network or internet connection. If master-slave replication is needed – because, for example, they need to test replication in a development environment before deploying changes elsewhere – they have to create it on the same machine. While the setup of a single MySQL instance is fairly simple, we need to make some extra effort to set up a second instance, and then the master-slave replication between them.

For this step-by-step tutorial, I’ve chosen Ubuntu Linux as the host operating system, and the provided commands are for that operating system. If you want to set up your MySQL master-slave replication on some other operating system, you will need to adapt the commands accordingly. However, the general principles of setting up MySQL master-slave replication on the same machine are the same for all operating systems.


Installation of the first MySQL instance

If you already have one instance of MySQL database installed on your machine, you can skip this step.

The easiest way to install MySQL on Ubuntu is to run the following command from a terminal prompt:

sudo apt-get install mysql-server

During the installation process, you will be prompted to set a password for the MySQL root user.
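Optionally, and as general hardening rather than anything replication-specific, you may also want to run MySQL’s bundled security script at this point:

sudo mysql_secure_installation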

Setting up mysqld_multi

In order to manage two MySQL instances on the same machine efficiently, we need to use mysqld_multi.

The first step in setting up mysqld_multi is the creation of two separate [mysqld] groups in the existing my.cnf file. The default location of the my.cnf file on Ubuntu is /etc/mysql/. So, open the my.cnf file with your favorite text editor and rename the existing [mysqld] group to [mysqld1]. This renamed group will be used for the configuration of the first MySQL instance, which will also be configured as the master instance. Since in MySQL master-slave replication each instance must have its own unique server-id, add the following line to the [mysqld1] group:

server-id = 1
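For orientation, after the rename, the group might look something like the sketch below. The values shown are Ubuntu’s stock defaults, so treat this as an illustration rather than a definitive listing:

[mysqld1]
user      = mysql
pid-file  = /var/run/mysqld/mysqld.pid
socket    = /var/run/mysqld/mysqld.sock
port      = 3306
datadir   = /var/lib/mysql
server-id = 1
# a replication master must also have binary logging enabled, e.g.:
# log_bin = /var/log/mysql/mysql-bin.log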

Since we need a separate [mysqld] group for the second MySQL instance, copy the [mysqld1] group with all current configurations, and paste it below in the same my.cnf file. Now, rename the copied group to [mysqld2], and make the following changes in the configuration for the slave:

server-id           = 2
port                = 3307
socket              = /var/run/mysqld/mysqld_slave.sock
pid-file            = /var/run/mysqld/mysqld_slave.pid
datadir             = /var/lib/mysql_slave
log_error           = /var/log/mysql_slave/error_slave.log
relay-log           = /var/log/mysql_slave/relay-bin
relay-log-index     = /var/log/mysql_slave/relay-bin.index
master-info-file    = /var/log/mysql_slave/master.info
relay-log-info-file = /var/log/mysql_slave/relay-log.info
read_only           = 1

To setup the second MySQL instance as a slave, set server-id to 2, as it must be different to the master’s server-id.

Since both instances will run on the same machine, set the port for the second instance to 3307, since it has to be different from the port used for the first instance, which is 3306 by default.

In order to enable this second instance to use the same MySQL binaries, we need to set different values for socket, pid-file, datadir and log_error.

We also need to enable relay-log in order to use the second instance as a slave (parameters relay-log, relay-log-index and relay-log-info-file), as well as to set master-info-file.

Finally, in order to make the slave instance read-only, the parameter read_only is set to 1. You should be careful with this option, since it doesn’t completely prevent changes on the slave: even when read_only is set to 1, updates will still be permitted from users who have the SUPER privilege. MySQL has recently introduced the new parameter super_read_only to prevent SUPER users from making changes; this option is available as of version 5.7.8.
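If you are running 5.7.8 or newer and want that extra protection, a one-line addition to the [mysqld2] group would do it (shown here as a sketch; it can also be toggled at runtime with SET GLOBAL super_read_only = ON;):

super_read_only = 1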

Apart from the [mysqld1] and [mysqld2] groups, we also need to add a new group [mysqld_multi] to the my.cnf file:

[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin
user       = multi_admin
password   = multipass

Once we install the second MySQL instance and start up both, we will give appropriate privileges to the multi_admin user so that it can shut down the MySQL instances.
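At this point you can already sanity-check that mysqld_multi picks up both groups from my.cnf by running:

mysqld_multi report

It should list both configured servers, corresponding to the [mysqld1] and [mysqld2] groups, and whether each is currently running.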

Create new folders for the second MySQL instance

In the previous step we prepared the configuration file for the second MySQL instance. In that configuration file two new folders are used. The following Linux commands should be used in order to create those folders with appropriate privileges:

mkdir -p /var/lib/mysql_slave
chmod --reference /var/lib/mysql /var/lib/mysql_slave
chown --reference /var/lib/mysql /var/lib/mysql_slave
 
mkdir -p /var/log/mysql_slave
chmod --reference /var/log/mysql /var/log/mysql_slave
chown --reference /var/log/mysql /var/log/mysql_slave

Additional security settings in AppArmor

In some Linux environments, AppArmor security settings are needed in order to run the second MySQL instance. At least, they are required on Ubuntu.

To properly set up AppArmor, edit the /etc/apparmor.d/usr.sbin.mysqld file with your favorite text editor and add the following lines:

/var/lib/mysql_slave/ r,
/var/lib/mysql_slave/** rwk,
/var/log/mysql_slave/ r,
/var/log/mysql_slave/* rw,
/var/run/mysqld/mysqld_slave.pid rw,
/var/run/mysqld/mysqld_slave.sock w,
/run/mysqld/mysqld_slave.pid rw,
/run/mysqld/mysqld_slave.sock w,

After you save the file, reboot the machine in order for these changes to take effect.
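If you would rather avoid a full reboot, reloading just the modified profile is usually sufficient. A sketch, assuming the profile path used above:

sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld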

Installation of the second MySQL instance

Several different approaches may be followed for the installation of the second MySQL instance. The approach presented in this tutorial uses the same MySQL binaries as the first, with separate data files necessary for the second installation.

Since we have already prepared the configuration file and the necessary folders and security changes in the previous steps, the final installation step of the second MySQL instance is the initialization of the MySQL data directory.

Execute the following command to initialize the new MySQL data directory:

mysql_install_db --user=mysql --datadir=/var/lib/mysql_slave
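Note that mysql_install_db is deprecated as of MySQL 5.7.6. If you are on a newer version, the equivalent initialization step would be roughly the following; --initialize-insecure creates the data directory with an empty root password, which you then set in the next step, mirroring the mysql_install_db flow:

mysqld --initialize-insecure --user=mysql --datadir=/var/lib/mysql_slave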

Once the MySQL data directory is initialized, you can start both MySQL instances using the mysqld_multi service:

mysqld_multi start

Set the root password for the second MySQL instance using mysqladmin with the appropriate host and port. Keep in mind that if the host and port are not specified, mysqladmin will connect to the first MySQL instance by default:

mysqladmin --host=127.0.0.1 --port=3307 -u root password rootpwd

In the example above I set the password to “rootpwd”, but using a more secure password is recommended.

Additional configuration of mysqld_multi

At the end of the “Setting up mysqld_multi” section, I wrote that we would grant the appropriate privileges to the multi_admin user later on; now is the time to do that. We need to grant this user the appropriate privileges in both instances, so let’s first connect to the first instance:

mysql --host=127.0.0.1 --port=3306 -uroot -p

Once logged in, execute the following two commands:

mysql> GRANT SHUTDOWN ON *.* TO 'multi_admin'@'localhost' IDENTIFIED BY 'multipass';
mysql> FLUSH PRIVILEGES;

Exit from the MySQL client, and connect to the second instance:

mysql --host=127.0.0.1 --port=3307 -uroot -p

Once logged in, execute the same two commands as above:

mysql> GRANT SHUTDOWN ON *.* TO 'multi_admin'@'localhost' IDENTIFIED BY 'multipass';
mysql> FLUSH PRIVILEGES;

Exit from the MySQL client.

Start both MySQL instances automatically on boot

The final step of setting up mysqld_multi is the installation of an automatic boot script in init.d.

To do that, create a new file named mysqld_multi in /etc/init.d, and give it the appropriate privileges:

cd /etc/init.d
touch mysqld_multi
chmod +x /etc/init.d/mysqld_multi

Open this new file with your favorite text editor, and copy the following script:

#!/bin/sh

### BEGIN INIT INFO
# Provides:          mysqld_multi
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start daemon at boot time
# Description:       Enable service provided by daemon.
### END INIT INFO

bindir=/usr/bin

# Make sure the mysqld_multi binary is present and executable
if test -x $bindir/mysqld_multi
then
    mysqld_multi="$bindir/mysqld_multi";
else
    echo "Can't execute $bindir/mysqld_multi";
    exit 1;
fi

case "$1" in
    'start' )
        "$mysqld_multi" start $2
        ;;
    'stop' )
        "$mysqld_multi" stop $2
        ;;
    'report' )
        "$mysqld_multi" report $2
        ;;
    'restart' )
        "$mysqld_multi" stop $2
        "$mysqld_multi" start $2
        ;;
    *)
        echo "Usage: $0 {start|stop|report|restart}" >&2
        ;;
esac

Add the mysqld_multi service to the default runlevels with the following command:

update-rc.d mysqld_multi defaults

Reboot your machine, and check that both MySQL instances are running by using the following command:

mysqld_multi report

Set up master-slave replication

Now that we have two MySQL instances running on the same machine, we will set up the first instance as the master and the second as the slave.

Part of the configuration was already done in the “Setting up mysqld_multi” section. The only remaining change in the my.cnf file is to enable binary logging on the master. To do this, edit the my.cnf file with the following changes and additions in the [mysqld1] group:

log_bin                        = /var/log/mysql/mysql-bin.log
innodb_flush_log_at_trx_commit = 1
sync_binlog                    = 1
binlog-format                  = ROW

Restart the master MySQL instance in order for these changes to take effect:

mysqld_multi stop 1
mysqld_multi start 1

In order for the slave to connect to the master with the correct replication privileges, a new user should be created on the master. Connect to the master instance using the MySQL client with the appropriate host and port:

mysql -uroot -p --host=127.0.0.1 --port=3306

Create a new user for replication:

mysql> CREATE USER 'replication'@'%' IDENTIFIED BY 'replication';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'%';

Exit from the MySQL client.

Execute the following command in order to create a dump of the master data:

mysqldump -uroot -p --host=127.0.0.1 --port=3306 --all-databases --master-data=2 > replicationdump.sql

Here we use the --master-data=2 option in order to have a comment containing a CHANGE MASTER statement inside the backup file. That comment indicates the replication coordinates at the time of the backup, and we will need those coordinates later to update the master information on the slave instance. Here is an example of that comment:

--
-- Position to start replication or point-in-time recovery from
--

-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=349;
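Rather than opening the dump in an editor, you can pull these coordinates out with a quick grep:

grep "CHANGE MASTER TO" replicationdump.sql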

Import the dump you created in the previous step into the slave instance:

mysql -uroot -p --host=127.0.0.1 --port=3307 < replicationdump.sql

Finally, in order for the slave instance to connect to the master instance, the master information on the slave needs to be updated with the appropriate connection parameters.

Connect to the slave instance using the MySQL client with the appropriate host and port:

mysql -uroot -p --host=127.0.0.1 --port=3307

Execute the following command in order to update the master information (take the replication coordinates from the dump file replicationdump.sql, as explained above):

mysql> CHANGE MASTER TO
	-> MASTER_HOST='127.0.0.1',
	-> MASTER_USER='replication',
	-> MASTER_PASSWORD='replication',
	-> MASTER_LOG_FILE='mysql-bin.000001',
	-> MASTER_LOG_POS=349;

Execute the following command in order to start the slave:

mysql> START SLAVE;

Execute the following command in order to verify the replication is up and running:

mysql> SHOW SLAVE STATUS \G
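In the output, the fields to check first are Slave_IO_Running and Slave_SQL_Running, which should both show Yes, and Seconds_Behind_Master, which should be a number rather than NULL. For an end-to-end sanity check, you can also create a throwaway database on the master and confirm it appears on the slave; the database name below is just an example:

-- on the master (port 3306)
mysql> CREATE DATABASE replication_test;

-- on the slave (port 3307)
mysql> SHOW DATABASES LIKE 'replication_test';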

Congratulations. Your MySQL master-slave replication on the same machine is now successfully set up.

Wrap Up

Having master-slave replication configured in your development environment is useful if you need it for a scale-out solution in the production environment. This way, you will also have separate data sources configured for write and read operations, so you can test locally that everything works as expected before further deployment.

Additionally, you may want to have several slave instances configured on the same machine to test a load balancer that distributes read operations to several slaves. In that case, you may repeat the same steps in this guide to set up the other slave instances.

This article was written by IVAN BOJOVIC, a Toptal SQL developer.

How To Communicate The Value Of User Research


The beginning of a new project: Your client needs help with a redesign of its website or application.

“We want to improve the user experience, it has to be jaw-dropping for our customers, we want them to fall in love with our product.”

Here is the good news: Your client is aware of User Experience (UX), cares about customers’ needs, and sees the value in investing in a great user experience. They asked for an expert with UX skills to help, but do they really understand what it means to deliver an exceptional user experience?

User research is a vital component of UX design. Don’t let anyone tell you otherwise.

UX is more than a bunch of rules and heuristics that you follow in your product design process. UX is subjective, as the name suggests. It is the (subjective) experience that a user gets while using a product. Therefore, we have to understand the needs and goals of potential users (and those are unique for each product), their tasks, and context.

As a UX expert you should already be familiar with the maxim, It all starts with knowing the user.

Now for some bad news; this is the point when you discover your client’s misconceptions about UX.

UX expert: “Ok, let’s start with your users: Who are they? What do they do? What do they want? What are some of their pain points? I would like to talk to them, observe them, learn from them…”

Client: “Oh, we don’t need user research, that’s a waste of time.”

Wrong!

In this post I will try to explain why, and hopefully, help fellow UX specialists in their efforts to convince clients that good UX is next to impossible if it is not preceded by good user research.

No Need For User Research? There Is Always A Need For User Research

You cannot create a great user experience if you don’t know your users or their needs.

Don’t let anyone tell you differently. Don’t simply accept the common argument that there is no time or money to do any user research for your project.

User research should shape your product design and define guidelines that will enable you to make the right UX decisions.

User research will shape your product; it will define the guidelines for creating a product with a good experience. Not spending any time on research, and basing all of your design decisions on best guesses and assumptions, puts you at risk of not meeting your user needs.

This is how senior UX architect Jim Ross, writing for UXmatters, sees it:

“Creating something without knowing users and their needs is a huge risk that often leads to a poorly designed solution and, ultimately, results in far higher costs and sometimes negative consequences.”

Lack Of User Research Can Lead To Negative Consequences

Skipping user research will often result in “featuritis”: decisions driven by technical possibilities rather than filtered through user goals.

“My wife would really enjoy this feature! Oh, and I heard from this person that they would like to be able to xyz, so let’s add it in there too.”

This leads to things such as overly complex dashboards in cars, where the user’s focus should be on driving, not on figuring out how to navigate an elaborate infotainment system.

Many users find automotive infotainment systems overly complex and distracting. Identifying the target audience is crucial to good UX design.

Tesla’s cutting-edge infotainment system, based on Nvidia Tegra hardware, employs two oversized displays, one of which replaces traditional dials, while the other replaces the center console. Yes, it looks good, but it was designed with tech-savvy users in mind. In other words, geeks will love it, but it’s clearly not for everyone. It works for Tesla and its target audience, but don’t expect to see such solutions in low-cost vehicles designed with different people in mind.

Poorly designed remote controls are not intuitive, so casual users tend to find them overwhelming, resulting in a frustrating user experience.

Old remote controls are another example of hit-and-miss UX. There is little in the way of standardization, so each one takes time getting used to.

But what about the purely digital user experience? Too many fields in a form, or too much information may overwhelm and drive your users away.

Poorly designed digital interfaces can drive users away. Even if they don’t, they will annoy users and feel like a waste of time.

Rather than encouraging the desired behaviour, poorly designed and implemented interfaces are more likely to scare off potential users.

Start User Research With Sources For Existing Information

Yes, user research will expand the timeline and it won’t come cheap, but both time and costs can be minimized. You can start with existing, and easily accessible, sources of information about user behaviour to gain a better understanding of user needs. These are:

  • Data Analytics
  • User Reviews and Ratings
  • Customer Support
  • Market Research
  • Usability Testing

Quality user research requires time and resources. However, you can start by using existing information to get a sense of what your users need.

Let’s take a closer look at each of these sources.

Data Analytics

If you are working with an existing product, your client might have some data and insights about its use. Data analytics helps you get a good overview of general usage: How many visitors are coming to the website, which pages are most visited, where do visitors come from, when do they leave, how much time do they spend where, and so on.

But here is what this data is not telling you: How does the experience feel? What do users think about your service, and why are they spending time on your website? Why do they leave?

For example, your data indicates that users are spending a lot of time on a specific page. What it doesn’t tell you is why. It might be because the content is so interesting, which means users found what they were looking for. On the other hand, it could be an indication that users are looking for something they cannot find.

Data analytics is a good starting point, but it needs to be supplemented with qualitative data to support the interpretation of the statistics.

User Reviews And Ratings

Your client’s product might have already received some user feedback. There might be a section for feedback or ratings on the website itself, but external sources may be available as well. People might have talked about it in blog posts or on discussion boards, and users may have left app reviews in an app store. Check different sources to see what users are saying.

However, be aware of limitations. People tend to leave reviews and ratings about negative experiences. Don’t take this as a reason to shy away from user reviews or to ignore feedback!

“All these complainers… These aren’t the users we want, anyway!”

Instead, try to look for patterns and repetitive comments. Here are a few tips for making the most from user input:

  • Check whether any action has been taken on negative comments.
  • Compare the timing of negative comments to releases and changelogs. Even great apps can suffer from poor updates, leading to a lot of negative comments in the days following the update.
  • Do your best to weed out baseless comments posted by trolls.
  • What are users saying about the competition? Identify positive and negative differentiators.
  • Don’t place too much trust in “professional and independent” reviews because they can be anything but professional and independent.

User reviews are a good source for collecting information on recurrent problems and frustrations, but they won’t give you an entirely objective view of what users think about your product.

Customer Support

Your client might have a customer support hotline or salespeople who are in touch with the user base. This is a good resource to get a better understanding of what customers are struggling with, what kind of questions they have, what features/functionality they are missing.

Setting up a couple of quick interviews with call center agents, and even shadowing some of their calls, will allow you to collect helpful data without investing too much time or money.

Customer support provides you with a good opportunity to learn about potential areas for improvement, but you will have to dive in deeper to get detailed information about problems.

Market Research

Your client may have some basic information about the customer base, such as accurate demographic information, or a good understanding of different market segments. This information is valuable to understand some of the factors behind the buying decision.

It does not offer any information about the usage of the product, though.

Market research is a good source of information if you need a better understanding of how your client thinks, what their marketing goals are, and what their market looks like. However, it won’t reveal all relevant details about user goals or needs.

Usability Testing

If you are lucky, your client might have done some usability tests and gained insights about what users like or dislike about the product. This data will help you understand how people are using the product and what the current experience looks like.

It is not quantitative research, and therefore you won’t get any numbers and statistics, but it helps you identify major problems, and gives you a better understanding of how your user group thinks.

There is also the option to do some quick remote testing sessions using services such as usertesting.com.

Usability tests are another good way of identifying key problem areas in a product.

How To Educate Your Client About The Value Of User Research

The budget might be small and the timeline tight, but ignoring user research will eventually bite you. Help your clients avoid pitfalls by making them aware of the benefits of user research.

What’s the ROI of good user experience? Knowledgeable UX experts must be able to communicate the value of user research to clients.

Here are some common arguments against user research and how to deal with them:

  • We don’t need user research. We trust in your skills as a UX expert

As a UX designer, you need to view user research as part of your toolkit, just like a hammer or saw for a craftsman. It helps you to apply your expertise in practice. No matter how much expertise you have as a designer, there is no generic solution for every problem. The solutions always depend on the user group and the environment, so they need to be defined and understood for every product.

User research helps you get an unbiased view, and lets you learn about users’ natural language, their knowledge and mental models, and their life context.

You are the UX design expert, but you are not the user.

  • Just use best practices instead of research

Best practices originate from design decisions in a specific context. The digital industry is evolving at a rapid pace, design trends and recommendations change constantly, and there is no fixed book of rules; we need to be able to adjust and adapt. Those decisions should be based on research, not on practices employed by others on different projects.

  • We already know everything about our users

Invite your client to a user needs discovery session to observe how users are using the product. Start with small tests and use remote usability testing tools such as usertesting.com to get some quick insights and videos of users in action.

The outcome might be a user journey map or a user task flow. Aim for a visualized document that identifies outstanding questions so you can define areas that need more research.

  • We have personas, we don’t need more research

Personas are a good tool for making your target group more tangible, and for becoming aware of different needs, key task flows, and how those might vary for different groups. They are common ground and a good starting point.

However, to redesign a product you need a better understanding of the usage. You need to know how people work with your product, what they do with it, when they get frustrated.

Ask for further details about user stories and task flows to make use of personas.

  • We don’t have the budget for it

The above list of sources for information about user behaviour should give you a good starting point for sharing ideas with your client on how to gain user information on a (very) tight budget.

Make your client aware of the risks if product design decisions are made without a good understanding of the user.

User Research Is The Basis Of Every Good User Experience

User experience is still a bit of a “mystery” in many circles: Everybody talks about it, yet it is hard to define, as a good experience is in the eye of every user.

Gaining a sound understanding of the context, the user goals, and the users’ way of thinking is, therefore, key to designing a truly exceptional user experience.

The more transparent you are with your work process, the better your client will understand your tools and the information you need to make good decisions.

While some clients may not be open to the idea of using additional resources on research, it’s necessary for experience specialists to explain the value of user research, and to argue for further research when necessary. To accomplish this, UX designers will require negotiating skills to make their case.

Luckily, proper user research is beneficial to clients and UX designers, so convincing clients to divert more resources towards research should be feasible in most situations. Reluctant clients may be swayed if you manage to devise a cost-effective user-research method, and I hope some of the tips and resources in this article will help boost user research, even if money is tight.

The original article was written by FRAUKE SEEWALD, a Toptal web designer.

Are All Trends Worth It? Top 5 Most Common UX Mistakes That Designers Make


As web designers, we are constantly trying to create a great user experience and help users achieve their goals. In our daily work, we use all kinds of common patterns and trends. In my experience, I have seen how those patterns and trends can easily steer both clients and designers/developers in the wrong direction. It’s no secret that from time to time we all get sidetracked by things that seem or look cool. I admit, I’ve fallen into those traps on plenty of occasions myself — I’ve opted to create something visually appealing and sacrificed the usability because of it. Why did I do it? I presumed a wow moment would happen and magically engage the user. I hoped this wow effect would make a long-lasting impact. The sobering moment came when I found out my users had a hard time understanding the interface I had created and was proud of. Sometimes you learn the hard way.

Top 5 Common UX Design Mistakes

The lesson I learnt was that to avoid a bumpy ride for our users, we must always ask ourselves what is under the shiny surface of the user interface we are creating. It’s worthwhile to stop before embracing any patterns or trends and think about the value they provide. As Kate Rutter brilliantly said, “ugly but useful trumps pretty but pointless”.

Please don’t get me wrong — I am not suggesting we shouldn’t make things pretty — I am suggesting we should aim to make things usable and pretty. The key with patterns and trends is to find a balance between what looks nice and the value behind them.

In this article, I will list several common UX mistakes I see on a daily basis. While they are not all bad per se, they can be dangerous if not implemented with caution. I’ll also share some insight on how you can improve the usability when implementing those trends or even suggest an alternative solution. Without further ado, let us get on with the list.

Common Mistake #1: Large, Fixed Headers

We’re seeing more and more tall, sticky headers — branding blocks and menus that have a fixed position and take up a significant amount of the viewport. They stay glued to the top and often block the content underneath them. I’ve seen headers on high-production websites that are over 150 pixels in height, but is there real value behind them? I might be pushing it a little bit, but large, fixed headers remind me of the dreaded and now ancient HTML frames. Yikes! Fixed elements can have real benefits, but please be careful when dealing with them — there are a number of important things to take into account. When implementing sticky headers, bear in mind a couple of common mistakes you want to try to avoid:

Too Large for Comfort

If the decision to design a large, fixed header has already been made, do some testing to find out if large is too large. Make sure not to go overboard and stuff the header with too much content, which will result in a super tall element. With the fixed header in place, browsing should still be comfortable and fast for your users. If you are having doubts about the sheer size of the header, try making it smaller without sacrificing too much of the visual appeal and brand presence. Failing to find a good balance could result in a somewhat claustrophobic experience for your users, leaving little room for the main content.

Last year, I worked on a project where the client insisted on a sticky navigation bar at desktop resolutions. Even though the bar was not especially tall, I feared some users might get that claustrophobic feeling of being boxed in. My workaround was simple — by giving the nav bar a slight transparency using CSS, users were able to see through the bar, which made the content area feel bigger. Here is that small piece of CSS code, so why not try it and see if it works for you too.

.header { opacity: 0.9; }
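One caveat with opacity is that it fades the header’s text and logo along with its background. If that is a concern, a common alternative is to make only the background translucent; a sketch, assuming a white header:

.header { background-color: rgba(255, 255, 255, 0.9); }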

Here is an example I found in the wild. Recently, I stumbled upon ATP’s player profile page on Roger Federer.

Its fixed header has a height of about 110px, and when you scroll down the page, a sub-navigation appears, making the header 160px high. That is over 30% of the visible page height on my MacBook Pro with the dock open.

Not Fixing The Problem on Mobile

Granted, a lot of users will be using a huge screen, and sticky menus can be a plus at super-big resolutions, but what about smaller resolutions and the mobile world? Bear in mind that a significant portion of your users will be using a small-resolution device, so for mobile, position: fixed is probably not the way to go. Thankfully, responsive techniques allow us to design a different solution and stick with the sticky for large resolutions only, as sketched below. The mobile-first approach will provide a lot of the answers: start with the mobile resolution, with only the essential content, and work your way up.
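A rough sketch of that idea follows; the class name and the 992px breakpoint are just assumptions, so pick whatever fits your design.

/* Mobile first: the header scrolls away with the page */
.header { position: static; }

/* Stick it to the top only on larger screens */
@media (min-width: 992px) {
    .header {
        position: fixed;
        top: 0;
        left: 0;
        width: 100%;
    }
}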

Coffee with a Cop also has a fixed header, but much smaller — less than 80 pixels.

This is arguably a good solution at large resolutions, as it enables quick and easy navigation. At small resolutions, the header is also fixed and takes up a considerable amount of the viewport. I would advise against a sticky header on mobile and suggest a sticky hamburger icon instead, which would open up a menu when tapped. Although this pattern is not a universal problem solver, it does free up a significant amount of space. On smartphones and tablets, space can be precious.

Common Mistake #2: Thin Fonts

Thin fonts seem to be everywhere — in numerous native mobile apps and modern websites. With screen technology advancing and rendering improving, a lot of designers are opting for thin (or light) fonts in their designs. They are elegant, fresh, and fashionable. However, thin type can cause usability problems. One of the main goals of any text is to be legible, and thin type can affect legibility in a major way. Keep in mind that not everyone will be using your website on a display that renders thin type well. For instance, I have found thin type extremely difficult to read on my iPhone and iPad with Retina display. Before thinking about the look and feel of a font, let’s step back for just a second.

From the Apple Human Interface Guidelines:

Above all, text must be legible. If users can’t read the words in your app, it doesn’t matter how beautiful the typography is.

Apple is referring to mobile apps, but the exact same principle applies to all websites. As Colm Roche said, legibility ≠ optional, but mandatory for good usability. There is no point putting content on a website if most of your users can barely read it, is there?

Here are some of the common mistakes you might want to bear in mind before putting your type on a diet:

Using Thin for Thin’s Sake

As with any trend, it is dangerous to use it simply because others are using it. Fonts should not only look good. First and foremost, they should be legible and provide a stepping stone to good usability. The decision to use thin type simply because it looks good is bound to backfire. In his excellent talk More Perfect Typography, Tim Brown talks about a sweet spot at which a typeface sings. This sweet spot is a combination of size, weight, and color where you set the foundation of your website.

To make sure you have found a good body font and hit that sweet spot, do some testing in various environments. Which leads us to the next mistake worth avoiding:

Not Testing the Legibility on All Major Devices

Thin type may look good on your display, and you may not have a hard time reading it, but be aware that you are not your user. Invest in usability testing to find out if your real users are happy with the typography on all major devices: desktop computers, laptops, tablets, and smartphones. While doing mobile testing, have your participants use your website on mobile devices in daylight — your real users will not always have perfect browsing conditions. If you have ever had to read something on a mobile device on a sunny day, you know how difficult it can be. If you decide to use a thin font on your website, there’s a simple way to adapt to mobile users. I will show a solution on a website I saw recently:

Oak does a fine job of adapting to the needs of the users — on the desktop resolution, their H1 heading has a very thin font weight. Since the heading is large and had a good color contrast, I suspect legibility does not suffer. On mobile, where the size of the heading is significantly smaller, the weight is slightly thicker, which surely aids legibility. Clearly, they have spotted legibility issues with thin fonts on small sizes and implemented a greater font weight through media queries. Their solution is simple, but very effective.
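A minimal version of that approach could look like the following; the weights and breakpoint are illustrative, not Oak’s actual values.

/* Thin heading on large screens, where it stays legible */
h1 { font-weight: 100; }

/* A slightly heavier weight on small screens to aid legibility */
@media (max-width: 768px) {
    h1 { font-weight: 300; }
}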

Common Mistake #3: Low Contrast

Low color contrast has become somewhat of a trend in user interface design in recent years. We have already covered thin fonts, which create low type contrast, but there is a much bigger trap you can fall into — a combination of thin type with low color contrast will make yours truly really scratch his head and think, have we lost our minds? Of course, not all low contrast is bad. It can even add to the visual appeal if designed with care. But as is the case with all UX mistakes, it is important not to go over the top and to keep usability in mind.

A couple of major mistakes you might want to avoid while dealing with contrast are:

Low Color Contrast in Body Copy

While low color contrast is not exclusively bad, it can have quite a bad impact on the usability of your website and make text very hard to read for some of your users. If this article inspires you to increase color contrast on one thing only, make it your main body copy. That is probably the worst area to experiment with.

Cool Springs Financial uses a thin variant of Helvetica for body text. While it looks elegant and contributes to an aesthetically pleasing user interface, it is difficult to read on a number of platforms.

I did a quick test on a MacBook Pro with a Retina display, as well as on an iPhone and an iPad with Retina displays. The screenshot, taken on my MacBook Pro, reveals the contrast and legibility problems. I had a hard time reading the text on the website on all of my devices.

Not Testing the Contrast

Consider doing some user testing to avoid issues down the road. I already hear some of my clients and colleagues go “Bojan, user testing is time-consuming and expensive”. It can be, but it really does not take an awful lot to test the contrast on your website. Start with body copy and work your way up. There is a nifty tool called Colorable which will help you set a correct text contrast according to the WCAG accessibility guidelines. Once you know you are using correct text contrast, adjust other colors on your website and do quick user tests to make sure most of your users have a pain-free time. I doubt low contrast will cause a rebellion, but it could frustrate a lot of your users.

Common Mistake #4: Scroll Hijacking

Another trend we’re seeing a lot of is the scroll hijack. Websites that implement this trend take control of the scroll (usually with JavaScript) and override a basic function of the web browser. The user no longer has full control of the page scroll and is unable to predict its behavior, which can easily lead to confusion and frustration. It’s a risky experiment that could hurt the usability, so I advise great caution.

If you’re set on implementing a scroll hijack, you might want to try and avoid the following mistakes:

Using the Hijack Just Because It’s Trendy

Some websites can get away with scroll hijacking, but that is no guarantee your website can. Trends and patterns can’t be blindly followed and implemented. For example, we’re seeing many designers follow Apple’s presentational pages with scroll hijacking, parallax effects and high resolution images of various devices. Apple has their own reasons, their own users, a unique concept, and unique content. Since every website has unique problems, it also must have solutions tailored for those problems.

Not Testing on Actual Users

To avoid issues when borrowing ideas or patterns, make sure to test the prototype of your website on users. Simple usability testing will reveal whether the implementation of the scroll hijack is feasible or not. Testing will surely answer a lot of questions and provide clues on how you can improve upon your idea. Without testing, you have no way of knowing which way to go, and developing websites based on assumptions is often costly in the long run.

Tumblr uses scroll hijacking on their current homepage. While their implementation could also be a risky one, it seems they understand their target audience, the content and concept support it, and they have addressed many of the user needs. When the user attempts to scroll down the page, the scroll is hijacked by the website, but the user is then quickly taken to the next section of the homepage. The content is broken into several sections or blocks, which are easy to distinguish, and big indicator dots are fixed on the left side of the viewport. As a result, the homepage feels like a huge carousel you have control over, rather than an experimental website with a mind of its own.

Common Mistake #5: Ineffective Carousels

Carousels are very common on the web, and have been for a long time. While they can be effective, they can also turn into a nightmare if not designed and developed carefully. The nightmare for your users could be the fact that they are having difficulties understanding it. The nightmare for you could be the fact that your users are not seeing the important content in some of the slides of the carousel.

Carousels with no content of importance: not the most effective use of space on your website

A lot of carousels I’m seeing have similar downsides. Some of those are:

Lack of Real Value for the Users

What is the real value the carousel offers to your users? If done right, a carousel should engage your users and help them achieve their goals quickly and pain-free. I often see carousels that do not provide additional value, but appear to be mere decoration. Here is a quick test you can do: take a post-it and write down three benefits of the carousel for the user. If you fail to think of three, there is a good chance your carousel needs more work.

Too many slides could have a negative effect on the users and they might simply choose to ignore the carousel. Usability guru Jakob Nielsen suggests the following:

Include five or fewer frames within the carousel, as it’s unlikely users will engage with more than that.

While more than five could be too many, fewer than three could indicate that a better solution is warranted. One of the premises for a carousel is that you need a lot of content fitted into a small amount of space, but with just two slides, why not show both slides at once and forget about sliding altogether?

The Previous and Next Arrows and Slide Indicators Are Not Obvious and/or Accessible

Make sure you are making your precious content accessible. Important information in a carousel could remain hidden if the next and previous arrows are not obvious and large enough for a comfortable click. Oh, and do not forget the tap — your mobile users will thank you.

Sometimes, there are no arrows in a carousel, and the indicator dots are links for jumping between slides. Remember that you need to provide a nice, big clickable and tappable area (I recommend 35x35px at the very least). Otherwise, the small targets might lead to very frustrating target practice and an exit from your website. One way to achieve this is sketched below.
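A common trick is to keep the dot visually small while padding out its clickable area; a sketch, with a hypothetical class name:

/* An 11px dot with a roughly 35x35px click/tap target */
.carousel-dot {
    width: 11px;
    height: 11px;
    padding: 12px;                /* expands the target to 35x35px */
    background-clip: content-box; /* paint only the dot, not the padding */
    border-radius: 50%;
}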

The Floresta Longo Foundation website has a carousel of images in the header. It autoplays and slides through five photographs. The previous and next arrows are small and transparent, which makes them both hard to spot and hard to click or tap. There is no indicator of which slide you are on, and no labels showing what each photograph represents. The images are not links and act purely as decoration. While the carousel may hold some value in engaging the user, it certainly leaves a lot to be desired.

Conclusion

I have listed several common UX mistakes in some of the current web trends. If you have implemented or are thinking about implementing any of them, I sincerely hope this article has provided something you will find useful. As a UX designer, use your best judgement and do not be afraid to improvise, but always remember to keep your users in mind.

Feel free to start a conversation below. I would love to read about your experiences, points of view, or suggestions on how to make things better.

The original article is from Toptal. Find more UX resources here.