Writing Tests That Matter: Tackle The Most Complex Code First


There are a lot of discussions, articles, and blogs around the topic of code quality. People say – use Test Driven techniques! Tests are a “must have” to start any refactoring! That’s all cool, but it’s 2016 and there is a massive volume of products and code bases still in production that were created ten, fifteen, or even twenty years ago. It’s no secret that a lot of them have legacy code with low test coverage.

While I’d like to always be at the leading, or even bleeding, edge of the technology world – engaged with cool new projects and technologies – unfortunately it’s not always possible and often I have to deal with old systems. I like to say that when you develop from scratch, you act as a creator, mastering new matter. But when you’re working on legacy code, you’re more like a surgeon – you know how the system works in general, but you never know for sure whether the patient will survive your “operation”. And since it’s legacy code, there are not many up-to-date tests for you to rely on. This means that frequently one of the very first steps is to cover it with tests. More precisely, not merely to provide coverage, but to develop a test coverage strategy.

Coupling and Cyclomatic Complexity: Metrics for Smarter Test Coverage

Forget 100% coverage. Test smarter by identifying classes that are more likely to break.

Basically, what I needed to determine was which parts (classes / packages) of the system we needed to cover with tests in the first place, where we needed unit tests, where integration tests would be more helpful, etc. There are admittedly many ways to approach this type of analysis, and the one that I’ve used may not be the best, but it’s kind of an automatic approach. Once my approach is implemented, it takes minimal time to actually do the analysis itself and, what is more important, it brings some fun into legacy code analysis.

The main idea here is to analyse two metrics – coupling (i.e., afferent coupling, or CA) and complexity (i.e. cyclomatic complexity).

The first one measures how many classes use our class, so it basically tells us how close a particular class is to the heart of the system; the more classes there are that use our class, the more important it is to cover it with tests.

On the other hand, if a class is very simple (e.g. contains only constants), then even if it’s used by many other parts of the system, it’s not nearly as important to create a test for. Here is where the second metric can help. If a class contains a lot of logic, the Cyclomatic complexity will be high.

The same logic can also be applied in reverse; i.e., even if a class is not used by many classes and represents just one particular use case, it still makes sense to cover it with tests if its internal logic is complex.

There is one caveat though: let’s say we have two classes – one with the CA 100 and complexity 2 and the other one with the CA 60 and complexity 20. Even though the sum of the metrics is higher for the first one we should definitely cover the second one first. This is because the first class is being used by a lot of other classes, but is not very complex. On the other hand, the second class is also being used by a lot of other classes but is relatively more complex than the first class.

To summarize: we need to identify classes with high CA and Cyclomatic complexity. In mathematical terms, a fitness function is needed that can be used as a rating – f(CA,Complexity) – whose values increase along with CA and Complexity.

Generally speaking, the classes with the smallest differences between the two metrics should be given the highest priority for test coverage.

Finding tools to calculate CA and Complexity for the whole code base, and to provide a simple way to extract this information in CSV format, proved to be a challenge. During my search, I came across two tools that are free, so it would be unfair not to mention them:

A Bit Of Math

The main problem here is that we have two criteria – CA and cyclomatic complexity – so we need to combine them and convert them into one scalar value. If we had a slightly different task – e.g., to find a class with the worst combination of our criteria – we would have a classical multi-objective optimization problem:

We would need to find a point on the so-called Pareto front (red in the picture above). What is interesting about the Pareto set is that every point in the set is a solution to the optimization task. Whenever we move along the red line we need to make a compromise between our criteria – if one gets better, the other one gets worse. Converting the two criteria into a single scalar value is called scalarization, and the final result depends on how we do it.

There are a lot of techniques that we can use here. Each has its own pros and cons. However, the most popular ones are linear scalarization and the one based on a reference point. Linear scalarization is the easiest one. Our fitness function will look like a linear combination of CA and Complexity:

f(CA, Complexity) = A×CA + B×Complexity

where A and B are some coefficients.

The point which represents a solution to our optimization problem will lie on the line (blue in the picture below). More precisely, it will be at the intersection of the blue line and red Pareto front. Our original problem is not exactly an optimization problem. Rather, we need to create a ranking function. Let’s consider two values of our ranking function, basically two values in our Rank column:

A×CA + B×Complexity = R1 and A×CA + B×Complexity = R2, for two fixed rank values R1 and R2

Both of the formulas written above are equations of lines; moreover, these lines are parallel. Taking more rank values into consideration, we’ll get more lines and therefore more points where the Pareto line intersects with the (dotted) blue lines. These points will be classes corresponding to a particular rank value.

Unfortunately, there is an issue with this approach. For any line (Rank value), we’ll have points with very small CA and very big Complexity (and vice versa) lying on it. This immediately puts points with a big difference between metric values at the top of the list, which is exactly what we wanted to avoid.

The other way to do the scalarization is based on a reference point. The reference point is the point with the maximum values of both criteria:

(max(CA), max(Complexity))

The fitness function will be the distance between the Reference point and the data points:

f(CA, Complexity) = √((CA − max(CA))² + (Complexity − max(Complexity))²)

We can think about this fitness function as a circle with the center at the reference point. The radius in this case is the value of the Rank. The solution to the optimization problem will be the point where the circle touches the Pareto front. The solution to the original problem will be sets of points corresponding to the different circle radii as shown in the following picture (parts of circles for different ranks are shown as dotted blue curves):

This approach deals better with extreme values, but there are still two issues. First – I’d like to have more points near the reference point to better overcome the problem that we faced with the linear combination. Second – CA and cyclomatic complexity are inherently different and have different value ranges, so we need to normalize them (e.g., so that all the values of both metrics would be from 1 to 100).

Here is a small trick that we can apply to solve the first issue – instead of looking at the CA and cyclomatic complexity, we can look at their inverted values. The reference point in this case will be (0,0). To solve the second issue, we can normalize the metrics using the minimum value. Here is how it looks:

Inverted and normalized complexity – NormComplexity:

NormComplexity = (1 + min(Complexity)) / (1 + Complexity) × 100

Inverted and normalized CA – NormCA:

NormCA = (1 + min(CA)) / (1 + CA) × 100

Note: I added 1 to make sure that there is no division by 0.

The following picture shows a plot with the inverted values:

Final Ranking

We are now coming to the last step – calculating the rank. As mentioned, I’m using the reference point method, so the only thing that we need to do is calculate the length of the vector, normalize it, and make it increase with the importance of creating a unit test for a class. Here is the final formula:

Rank(NormComplexity, NormCA) = 100 − √(NormComplexity² + NormCA²) / √2
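
To make the whole procedure concrete, here is a minimal Java sketch of the inversion, normalization, and ranking steps. The class names and metric values are invented for illustration; in practice they would come from the CSV export of your metrics tool.

import java.util.LinkedHashMap;
import java.util.Map;

public class RankCalculator {

    public static void main(String[] args) {
        // Hypothetical per-class metrics: {afferent coupling, cyclomatic complexity}
        Map<String, int[]> metrics = new LinkedHashMap<>();
        metrics.put("OrderService", new int[]{100, 2});
        metrics.put("PricingEngine", new int[]{60, 20});
        metrics.put("StringUtils", new int[]{3, 1});

        int minCa = metrics.values().stream().mapToInt(m -> m[0]).min().getAsInt();
        int minComplexity = metrics.values().stream().mapToInt(m -> m[1]).min().getAsInt();

        for (Map.Entry<String, int[]> entry : metrics.entrySet()) {
            int ca = entry.getValue()[0];
            int complexity = entry.getValue()[1];

            // Inverted and normalized metrics, as defined above
            double normCa = (1.0 + minCa) / (1 + ca) * 100;
            double normComplexity = (1.0 + minComplexity) / (1 + complexity) * 100;

            // Distance from the (0,0) reference point, rescaled so that higher means "test me first"
            double rank = 100 - Math.sqrt(normCa * normCa + normComplexity * normComplexity) / Math.sqrt(2);

            System.out.printf("%-15s rank = %.1f%n", entry.getKey(), rank);
        }
    }
}

With these made-up numbers, the (CA 60, complexity 20) class from the earlier example ends up at the top of the list, the (CA 100, complexity 2) class comes second, and the trivial utility class drops to the bottom – which is exactly the ordering we were after.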

More Statistics

There is one more thought that I’d like to add, but let’s first have a look at some statistics. Here is a histogram of the Coupling metrics:

What is interesting about this picture is the number of classes with low CA (0–2). Classes with CA 0 are either not used at all or are top-level services. These represent API endpoints, so it’s fine that we have a lot of them. But classes with CA 1 are the ones that are directly used by the endpoints, and we have more of these classes than endpoints. What does this mean from an architecture / design perspective?

In general, it means that we have a kind of script-oriented approach – we script every business case separately (we can’t really reuse the code as business cases are too diverse). If that is the case, then it’s definitely a code smell and we need to do refactoring. Otherwise, it means the cohesion of our system is low, in which case we also need refactoring, but architectural refactoring this time.

Additional useful information we can get from the histogram above is that we can completely filter out classes with low coupling (CA in {0,1}) from the list of the classes eligible for coverage with unit tests. The same classes, though, are good candidates for the integration / functional tests.

You can find all the scripts and resources that I have used in this GitHub repository: ashalitkin/code-base-stats.

Does It Always Work?

Not necessarily. First of all, it’s all about static analysis, not runtime. If a class is linked to from many other classes, it can be a sign that it’s heavily used, but it’s not always true. For example, we don’t know whether the functionality is really heavily used by end users. Second, if the design and the quality of the system are good enough, then most likely different parts / layers of it are decoupled via interfaces, so static analysis of the CA will not give us a true picture. I guess it’s one of the main reasons why CA is not that popular in tools like Sonar. Fortunately, it’s totally fine for us since, if you remember, we are interested in applying this specifically to old, ugly code bases.

In general, I’d say that runtime analysis would give much better results, but unfortunately it’s much more costly, time consuming, and complex, so our approach is a potentially useful and lower cost alternative.

This article was written by Andrey Shalitkin, a Toptal Java developer.

A New Way of Using Email for Support Apps: An AWS Tutorial


Email may not be as cool as other communication platforms, but working with it can still be fun. I was recently tasked with implementing messaging in a mobile app. The only catch was that the actual communication needed to be over email. We wanted app users to be able to communicate with a support team just like you would send a text message. Support team members needed to receive these messages via email, and also needed to be able to respond to the originating user. To the end user, everything needed to look and function just like any other modern messaging app.

In this article, we will take a look at how to implement a service similar to the one described above using Java and a handful of Amazon’s web services. You will need a valid AWS account, a domain name, and access to your favorite Java IDE.

The Infrastructure

Before we write any code, we’re going to set up the required AWS services for routing and consuming email. We’re going to use SES for sending and consuming emails and SNS+SQS for routing incoming messages.

Consuming Email Programmatically Using AWS

Revitalize e-mail in support applications with Amazon SES.

It all starts here with SES. Start by logging into your AWS account and navigating to the SES console.

Before we begin, you’re going to need a verified domain name you can send emails from.

This will be the domain app users will be sending email messages from and support members will be replying to. Verifying a domain with SES is a straightforward process, and more info can be found here.

If this is the first time you are using SES, or you have not requested a sending limit, your account will be sandboxed. This means that you will not be able to send email to addresses that aren’t verified with AWS. This may cause an error later in the tutorial, when we send an email to our fictional help desk. To avoid this, you can verify whatever email address you plan on using as your help desk in the SES console in the Email Addresses tab.

Once you have a verified domain, we can create a rule set. Navigate to the Rule Sets tab in the SES console and create a new Receipt Rule.

The first step when creating a receipt rule will be defining a recipient.

Recipient filters allow you to define which emails SES will consume, and how to process each incoming message. The recipient we define here needs to match the domain and address pattern that app user messages are emailed from. The simplest case here would be to add a recipient for the domain we previously verified, in our case example.com. This will configure SES to apply our rule to all emails sent to example.com (e.g., foo@example.com, bar@example.com).

To create a rule for our entire domain, we would add a recipient for example.com.

It’s also possible to match address patterns. This is useful if you want to route incoming messages to different SQS queues.

Say that we have queue A and queue B. We could add two recipients: a@example.com and b@example.com. If we want to insert a message into queue A, we would email a+foo@example.com. The a part of this will match our a@example.com recipient. Everything between the + and @ is arbitrary user data; it will not affect SES’s address matching. To insert into queue B, simply replace a with b.

After you define your recipients, the next step is to configure the action SES will perform after consuming a new email. We eventually want these messages to end up in SQS; however, it is currently not possible to go directly from SES to SQS. To bridge the gap, we need to use SNS. Select the SNS action and create a new topic. We will eventually configure this topic to insert messages into SQS.

Select create SNS topic and give it a name.

After the topic is created, we need to select a message encoding. I’m going to use Base64 in order to preserve special characters. The encoding you choose will affect how messages are decoded when we consume them in our service.

Once the rule is set, we just need to name it.

The next step will be configuring SQS and SNS. For that, we need to head over to the SQS console and create a new queue.

To keep things simple, I’m using the same name as our SNS topic.

After we define our queue, we’re going to need to adjust its access policy. We only want to grant our SNS topic permission to insert. We can achieve this by adding a condition that matches our SNS topic arn.

The value field should be populated with the ARN for the SNS topic SES is notifying.
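
For reference, the resulting access policy should end up looking roughly like the following. The account ID and ARNs below are placeholders for your own queue and topic:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:com-example-ses",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:sns:us-east-1:123456789012:com-example-ses"
        }
      }
    }
  ]
}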

After SQS is set up, it’s time for one more trip back to the SNS console to configure your topic to insert notifications into your shiny new SQS queue.

In the SNS console, select the topic SES is notifying. From there, create a new subscription. The subscription protocol should be Amazon SQS, and the destination should be the ARN of the SQS queue you just generated.

After all that, the AWS side of the equation should be all set up. We can test our work by emailing ourselves. Send an email to the domain configured with SES, then head to the SQS console and select your queue. You should be able to see the payload containing your email.

Java Service to Deal with Emails

Now on to the fun part! In this section, we’re going to create a simple microservice capable of sending messages and processing incoming emails. The first step will be defining an API that will email our support desk on behalf of a user.

A quick note. We’re going to focus on the business logic components of this service, and won’t be defining REST endpoints or a persistence layer.

To build a Spring service, we’re going to use Spring Boot and Maven. We can use the Spring Initializr (start.spring.io) to generate a project for us.

To start, our pom.xml should look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.toptal.tutorials</groupId>
	<artifactId>email-processor</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<packaging>jar</packaging>

	<name>email-processor</name>
	<description>A simple "micro-service" for emailing support on behalf of a user and processing replies</description>

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>1.3.5.RELEASE</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>

	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		<java.version>1.8</java.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
	</dependencies>

	
	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

</project>

Emailing Support on Behalf of a User

First, let’s define a bean for emailing our support desk on behalf of a user. The job of this bean will be to process an incoming message from a user ID, and email that message to our pre-defined support desk email address.

Let’s start by defining an interface.

public interface SupportBean {

    /**
     * Send a message to the application support desk on behalf of a user
     * @param fromUserId The ID of the originating user
     * @param message The message to send
     */
    void messageSupport(long fromUserId, String message) throws JsonProcessingException;
}

And an empty implementation:

@Component
public class SupportBeanSesImpl implements SupportBean {

    /**
     * Email address for our application help-desk
     * This is the destination address user support emails will be sent to
     */
    private static final String SUPPORT_EMAIL_ADDRESS = "support@example.com";

    @Override
    public void messageSupport(long fromUserId, String message) {
        //todo: send an email to our support address
    }
}

Let’s also add the AWS SDK to our pom; we’re going to use the SES client to send our emails:

<dependency>
	<groupId>com.amazonaws</groupId>
	<artifactId>aws-java-sdk</artifactId>
	<version>1.11.5</version>
</dependency>

The first thing we need to do is generate an email address to send our user’s message from. The address we generate will play a critical role on the consuming side of our service. It needs to contain enough information to route the help desk’s reply back to the originating user.

To achieve this, we’re going to include the originating user ID in our generated email address. To keep things clean, we’re going to create an object containing the user ID and use the Base64 encoded JSON string of it as the email address.

Let’s create a new bean responsible for turning a user ID into an email address.

public interface UserEmailBean {

    /**
     * Returns a unique per user email address
     * @param userID Input user ID
     * @return An email address unique for the input userID
     */
    String emailAddressForUserID(long userID) throws JsonProcessingException;
}

Let’s start our implementation by adding the required constants and a simple inner class that will help us serialize our JSON.

@Component
public class UserEmailBeanJSONImpl implements UserEmailBean {

    /**
     * The TLD for all our generated email addresses
     */
    private static final String EMAIL_DOMAIN = "example.com";

    /**
     * com.fasterxml.jackson.databind.ObjectMapper used to create a JSON object including our user ID
     */
    private final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public String emailAddressForUserID(long userID) {
//todo: create the email address
        return null;
    }

    /**
     * Simple helper class we will serialize.
     * The JSON representation of this class will become our user email address
     */
    private static class UserDetails{
        private Long userID;

        public Long getUserID() {
            return userID;
        }

        public void setUserID(Long userID) {
            this.userID = userID;
        }
    }
}

Generating our email address is straightforward: all we need to do is create a UserDetails object and Base64-encode the JSON representation. The finished version of our emailAddressForUserID method should look something like this:

   @Override
    public String emailAddressForUserID(long userID) throws JsonProcessingException {
        UserDetails userDetails = new UserDetails();
        userDetails.setUserID(userID);
        //create a JSON representation.
        String jsonString = objectMapper.writeValueAsString(userDetails);
        //Base64 encode it
        String base64String = Base64.getEncoder().encodeToString(jsonString.getBytes());
        //create an email address out of it
        String emailAddress = base64String + "@" + EMAIL_DOMAIN;
        return emailAddress;
    }
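
For example, assuming Jackson’s default property naming, a user ID of 42 serializes to {"userID":42}, which Base64-encodes to eyJ1c2VySUQiOjQyfQ==, so the generated address would be eyJ1c2VySUQiOjQyfQ==@example.com. Any reply sent to that address carries everything we need to find our way back to the originating user.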

Now we can head back to SupportBeanSesImpl and update it to use the new email bean we just created.

private final UserEmailBean userEmailBean;

@Autowired
public SupportBeanSesImpl(UserEmailBean userEmailBean) {
        this.userEmailBean = userEmailBean;
}


@Override
public void messageSupport(long fromUserId, String message) throws JsonProcessingException {
        //user specific email
        String fromEmail = userEmailBean.emailAddressForUserID(fromUserId);
}

To send emails, we’re going to use the AWS SES client included with the AWS SDK.

  /**
     * SES client
     */
    private final AmazonSimpleEmailService amazonSimpleEmailService = new AmazonSimpleEmailServiceClient(
            new DefaultAWSCredentialsProviderChain() //see http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html
    );

We’re utilizing the DefaultAWSCredentialsProviderChain to manage credentials for us; this class will search for AWS credentials as defined here.

We’re going to need an AWS access key provisioned with access to SES and, eventually, SQS. For more info, check out the documentation from Amazon.
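
If you haven’t set up credentials before, two simple options the provider chain will pick up are the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables, or a shared credentials file at ~/.aws/credentials. The key values below are placeholders:

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY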

The next step will be updating our messageSupport method to email support using the AWS SDK. The SES SDK makes this a straightforward process. The finished method should look something like this:

@Override
public void messageSupport(long fromUserId, String message) throws JsonProcessingException {
        //User specific email
        String fromEmail = userEmailBean.emailAddressForUserID(fromUserId);

        //create the email 
        Message supportMessage = new Message(
                new Content("New support request from userID " + fromUserId), //Email subject
                new Body().withText(new Content(message)) //Email body, this contains the user’s message
        );
        
        //create the send request
        SendEmailRequest supportEmailRequest = new SendEmailRequest(
                fromEmail, //From address, our user's generated email
                new Destination(Collections.singletonList(SUPPORT_EMAIL_ADDRESS)), //to address, our support email address
                supportMessage //Email body defined above
        );
        
        //Send it off
        amazonSimpleEmailService.sendEmail(supportEmailRequest);
    }

To try it out, create a test class and inject the SupportBean. Make sure the SUPPORT_EMAIL_ADDRESS defined in SupportBeanSesImpl points to an email address you own. If your SES account is sandboxed, this address also needs to be verified. Email addresses can be verified in the SES console under the Email Addresses section.

@Test
public void emailSupport() throws JsonProcessingException {
	supportBean.messageSupport(1, "Hello World!");
}

After running this, you should see a message show up in your inbox. Better yet, reply to the message and check the SQS queue we set up earlier. You should see a payload containing your reply.

Consuming Replies from SQS

The last step will be to read in emails from SQS, parse out the email message, and figure out which user ID the reply should be forwarded to.

Message queueing services like Amazon SQS play a vital role in service-oriented architecture by allowing services to communicate with each other without having to compromise speed, reliability or scalability.

To listen for new SQS messages, we’re going to use the Spring Cloud AWS messaging SDK. This will allow us to configure a SQS message listener via annotations, and thus avoid quite a bit of boilerplate code.

First, the required dependencies.

Add the Spring Cloud messaging dependency:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-aws-messaging</artifactId>
</dependency>

And add Spring Cloud AWS to your pom dependency management:

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-aws</artifactId>
			<version>1.1.0.RELEASE</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Currently, Spring Cloud AWS doesn’t support annotation-driven configuration, so we’re going to have to define an XML bean. Luckily, we don’t need much configuration at all, so our bean definition will be pretty light. The main point of this file is to enable annotation-driven queue listeners, which will allow us to annotate a method as an SqsListener.

Create a new XML file named aws-config.xml in your resources folder. Our definition should look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aws-context="http://www.springframework.org/schema/cloud/aws/context"
       xmlns:aws-messaging="http://www.springframework.org/schema/cloud/aws/messaging"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/cloud/aws/context
       http://www.springframework.org/schema/cloud/aws/context/spring-cloud-aws-context.xsd
       http://www.springframework.org/schema/cloud/aws/messaging
     http://www.springframework.org/schema/cloud/aws/messaging/spring-cloud-aws-messaging.xsd">

    <!--enable annotation driven queue listeners -->
    <aws-messaging:annotation-driven-queue-listener />
    <!--define our region, this lets us reference queues by name instead of by URL. -->
    <aws-context:context-region region="us-east-1" />

</beans>

The important part of this file is <aws-messaging:annotation-driven-queue-listener />. We are also defining a default region. This is not necessary, but doing so will allow us to reference our SQS queue by name instead of by URL. We are not defining any AWS credentials; by omitting them, Spring will default to the DefaultAWSCredentialsProviderChain, the same provider we used earlier in our SES bean. More info can be found in the Spring Cloud AWS docs.

To use this XML config in our Spring Boot app, we need to explicitly import it. Head over to your @SpringBootApplication class and import it.

@SpringBootApplication
@ImportResource("classpath:aws-config.xml") //Explicit import for our AWS XML bean definition
public class EmailProcessorApplication {

	public static void main(String[] args) {
		SpringApplication.run(EmailProcessorApplication.class, args);
	}
}

Now let’s define a bean that will handle incoming SQS messages. Spring Cloud AWS lets us accomplish this with a single annotation!

/**
 * Bean responsible for polling SQS and processing new emails
 */
@Component
public class EmailSqsListener {

    @SuppressWarnings("unused") //IntelliJ isn't quite smart enough to recognize methods marked with @SqsListener yet
    @SqsListener(value = "com-example-ses", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)   //Mark this method as a SQS listener
                                                                                                    //Since we already set up our region we can use the logical queue name here
                                                                                                    //Spring will automatically delete messages if this method executes successfully
    public void consumeSqsMessage(@Headers Map<String, String> headers, //Map of headers returned when requesting a message from SQS
                                                                        //This map will include things like the received time, receive count, and message ID
                                  @NotificationMessage String rawJsonMessage   //JSON string representation of our payload
                                                                            //Spring Cloud AWS supports marshaling here as well
                                                                            //For the sake of simplicity we will work with incoming messages as a JSON object
    ) throws Exception{

        //com.amazonaws.util.json.JSONObject included with the AWS SDK
        JSONObject jsonSqsMessage = new JSONObject(rawJsonMessage);

    }
}

The magic here lies with the @SqsListener annotation. With this, Spring will set up an Executor and start polling SQS for us. Every time a new message is found, our annotated method will be invoked with the message contents. Optionally, Spring Cloud can be configured to marshall incoming messages, giving you the ability to work with strongly typed objects inside your queue listener. Additionally, you have the ability to inject a single header or a map of all headers returned from the underlying AWS call.

We’re able to use the logical queue name here since we previously defined the region in aws-config.xml; if we wanted to omit that, we could replace the value with our fully qualified SQS URL. We’re also defining a deletion policy, which will configure Spring to delete the incoming message from SQS when its condition is met. There are multiple policies defined in SqsMessageDeletionPolicy; we’re configuring Spring to delete our message if our consumeSqsMessage method executes successfully.

We’re also injecting the returned SQS headers into our method using @Headers, and the injected map will contain metadata related to the queue and payload received. The message body is injected using @NotificationMessage. Spring supports marshalling utilizing Jackson, or via a custom message body converter. For the sake of convenience, we’re just going to inject the raw JSON string and work with it using the JSONObject class included with the AWS SDK.

The payload retrieved from SQS will contain a lot of data. Take a look at the JSONObject to familiarize yourself with the payload returned. Our payload contains data from every AWS service it was passed through, SES, SNS, and finally SQS. For the sake of this tutorial, we really only care about two things: the list of email addresses this was sent to and the email body. Let’s start by parsing out the emails.

//Pull out the array containing all email addresses this was sent to
JSONArray emailAddressArray = jsonSqsMessage.getJSONObject("mail").getJSONArray("destination");
for(int i = 0 ; i < emailAddressArray.length() ; i++){
	String emailAddress = emailAddressArray.getString(i);
}
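
For orientation, the notification we’re pulling these fields from looks roughly like this (heavily abridged; the exact field set depends on your SES rule and SNS settings):

{
  "notificationType": "Received",
  "receipt": { ... },
  "mail": {
    "destination": ["eyJ1c2VySUQiOjQyfQ==@example.com"],
    ...
  },
  "content": "<the raw email, Base64 encoded as per our rule's encoding setting>"
}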

Since, in the real world, our help desk may include more than just the original sender in a reply, we’re going to want to verify each address before we parse out the user ID. This gives our support desk both the ability to message multiple users at the same time and the ability to include non-app users.

Let’s head back over to our UserEmailBean interface and add another method.

/**
 * Returns true if the input email address matches our template
 * @param emailAddress Email to check
 * @return true if it matches
 */
boolean emailMatchesUserFormat(String emailAddress);

In UserEmailBeanJSONImpl, to implement this method we’re going to want to do two things. First, check if the address ends with our EMAIL_DOMAIN, then check if we can marshall it.

   @Override
    public boolean emailMatchesUserFormat(String emailAddress) {

        //not our address, return right away
        if(!emailAddress.endsWith("@" + EMAIL_DOMAIN)){
            return false;
        }
        //We just care about the email part, not the domain part
        String emailPart = splitEmail(emailAddress);
        try {
            //Attempt to decode our email
            UserDetails userDetails = objectMapper.readValue(Base64.getDecoder().decode(emailPart), UserDetails.class);
            //We assume this email matches if the address is successfully decoded and marshaled  
            return userDetails != null && userDetails.getUserID() != null;
        } catch (IllegalArgumentException | IOException e) {
            //The Base64 decoder will throw an IllegalArgumentException if the input string is not Base64 formatted
            //Jackson will throw an IOException if it can't read the string into the UserDetails class
            return false;
        }
    }
    /**
     * Splits an email address on @
     * Returns everything before the @
     * @param emailAddress Address to split
     * @return all parts before @. If no @ is found, the entire address will be returned
     */
    private static String splitEmail(String emailAddress){
        if(!emailAddress.contains("@")){
            return emailAddress;
        }
        return emailAddress.substring(0, emailAddress.indexOf("@"));
    }

We defined two new methods, emailMatchesUserFormat which we just added to our interface, and a simple utility method for splitting an email address on the @. Our emailMatchesUserFormat implementation works by attempting to Base64 decode and marshall the address part back into our UserDetails helper class. If this succeeds, we then check to make sure the required userID is populated. If all this works out, we can safely assume a match.

Head back to our EmailSqsListener and inject the freshly updated UserEmailBean.

   private final UserEmailBean userEmailBean;

    @Autowired
    public EmailSqsListener(UserEmailBean userEmailBean) {
        this.userEmailBean = userEmailBean;
    }

Now we’re going to update the consumeSqsMethod. First let’s parse out the email body:

 //Pull our content, remember the content will be Base64 encoded as per our SES settings
        String encodedContent = jsonSqsMessage.getString("content");
        //Create a new String after decoding our body
        String decodedBody = new String(
                Base64.getDecoder().decode(encodedContent.getBytes())      
        );
        for(int i = 0 ; i < emailAddressArray.length() ; i++){
            String emailAddress = emailAddressArray.getString(i);
        }

Now let’s create a new method that will process the email address and email body.

private void processEmail(String emailAddress, String emailBody){
        
}

And finally, update the email loop to invoke this method if it finds a match.

//Loop over all sent to addresses
for(int i = 0 ; i < emailAddressArray.length() ; i++){
    String emailAddress = emailAddressArray.getString(i);
    //If we find a match, process the email
    if(userEmailBean.emailMatchesUserFormat(emailAddress)){
        processEmail(emailAddress, decodedBody);
    }
}

Before we implement processEmail, we need to add one more method to our UserEmailBean. We need a method for returning the userID from an email. Head back over to the UserEmailBean interface to add its last method.

   /**
     * Returns the userID from a formatted email address.
     * Returns null if no userID is found. 
     * @param emailAddress Formatted email address, this address should be verified using {@link #emailMatchesUserFormat(String)}
     * @return The originating userID if found, null if not
     */
    Long userIDFromEmail(String emailAddress);

The goal of this method will be to return the userID from a formatted address. The implementation will be similar to our verification method. Let’s head over to UserEmailBeanJSONImpl and fill in this method.

   @Override
    public Long userIDFromEmail(String emailAddress) {
        String emailPart = splitEmail(emailAddress);
        try {
            //Attempt to decode our email
            UserDetails userDetails = objectMapper.readValue(Base64.getDecoder().decode(emailPart), UserDetails.class);
            if(userDetails == null || userDetails.getUserID() == null){
                //We couldn't find a userID
                return null;
            }
            //ID found, return it
            return userDetails.getUserID();
        } catch (IllegalArgumentException | IOException e) {
            //The Base64 decoder will throw an IllegalArgumentException if the input string is not Base64 formatted
            //Jackson will throw an IOException if it can't read the string into the UserDetails class
            //Return null since we didn't find a userID
            return null;
        }
    }

Now head back over to our EmailSqsListener and update processEmail to use this new method.

private void processEmail(String emailAddress, String emailBody){
    //Parse out the email address
    Long userID = userEmailBean.userIDFromEmail(emailAddress);
    if(userID == null){
        //Whoops, we couldn't find a userID. Abort!
        return;
    }
}

Great! Now we have almost everything we need. The last thing we need to do is parse out the reply from the raw message.

Email clients, just like web browsers from a few years ago, are plagued by the inconsistencies in their implementations.

Parsing out replies from emails is actually a fairly complicated task. Email message formats are not standardized, and the variations between different email clients can be huge. The raw response is also going to include much more than the reply and a signature. The original message will most likely be included as well. The smart people over at Mailgun put together a great blog post explaining some of the challenges. They also open-sourced their machine-learning-based approach to parsing emails; check it out here.

The Mailgun library is written in Python, so for our tutorial we’re going to use a simpler Java based solution. GitHub user edlio put together an MIT licensed email parser in Java based on one of GitHub’s libraries. We’re going to use this great library.

First, let’s update our pom; we’re going to use https://jitpack.io to pull in EmailReplyParser.

<repositories>
	<repository>
		<id>jitpack.io</id>
		<url>https://jitpack.io</url>
	</repository>
</repositories>

Now add the GitHub dependency.

<dependency>
	<groupId>com.github.edlio</groupId>
	<artifactId>EmailReplyParser</artifactId>
	<version>v1.0</version>
</dependency>

We’re also going to use Apache Commons Email, as we need to parse the raw email into a javax.mail MimeMessage before passing it off to the EmailReplyParser. Add the commons dependency.

<dependency>
	<groupId>org.apache.commons</groupId>
	<artifactId>commons-email</artifactId>
	<version>1.4</version>
</dependency>

Now we can head back over to our EmailSqsListener and finish up processEmail. At this point, we have the originating userID and the raw email body. The only thing left to do is parse out the reply.

To accomplish this, we’re going to use a combination of javax.mail and edlio’s EmailReplyParser.

private void processEmail(String emailAddress, String emailBody) throws Exception {
        //Parse out the email address
        Long userID = userEmailBean.userIDFromEmail(emailAddress);
        if(userID == null){
            //Whoops, we couldn't find a userID. Abort!
            return;
        }

        //Default javax.mail session
        Session session = Session.getDefaultInstance(new Properties());
        //Create a new mimeMessage out of the raw email body
        MimeMessage mimeMessage = MimeMessageUtils.createMimeMessage(
                session,
                emailBody
        );
        MimeMessageParser mimeMessageParser = new MimeMessageParser(mimeMessage);
        //Parse the message
        mimeMessageParser.parse();
        //Parse out the reply for our message
        String replyText = EmailReplyParser.parseReply(mimeMessageParser.getPlainContent());
        //Now we're done!
        //We have both the userID and the response!
        System.out.println("Processed reply for userID: " + userID + ". Reply: " + replyText);
    }

Wrap Up

And that’s it! We now have everything we need to deliver a response to the originating user!

See? I told you email can be fun!

In this article, we saw how Amazon Web Services can be used to orchestrate complex pipelines. Although this particular pipeline was designed around email, the same tools can be leveraged to design even more complex systems, where you don’t have to worry about maintaining the infrastructure and can focus on the fun aspects of software engineering instead.

This article was written by Francis Altomare, a Toptal Java developer.

Go Programming Language: An Introductory Tutorial


What’s the Go Programming Language?

Go is a recent language which sits neatly in the middle of the landscape, providing lots of good features and deliberately omitting many bad ones. It compiles fast, runs fast-ish, includes a runtime and garbage collection, has a simple static type system and dynamic interfaces, and an excellent standard library.

Go and OOP

OOP is one of those features that Go deliberately omits. It has no subclassing, and so there are no inheritance diamonds or super calls or virtual methods to trip you up. Still, many of the useful parts of OOP are available in other ways.

*Mixins* are available by embedding structs anonymously, allowing their methods to be called directly on the containing struct (see embedding). Promoting methods in this way is called *forwarding*, and it’s not the same as subclassing: the method will still be invoked on the inner, embedded struct.
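
For instance, here is a minimal sketch of embedding and forwarding; the Logger and Server types are made up for illustration:

package main

import "fmt"

// Logger provides a Log method
type Logger struct {
    prefix string
}

func (l Logger) Log(message string) {
    fmt.Println(l.prefix + ": " + message)
}

// Server embeds Logger anonymously, so Logger's methods are promoted
type Server struct {
    Logger // embedded struct: no field name, just the type
    addr   string
}

func main() {
    s := Server{Logger: Logger{prefix: "server"}, addr: ":8080"}
    // Log is forwarded to the embedded Logger; the receiver is s.Logger, not s
    s.Log("listening on " + s.addr)
}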

Embedding also doesn’t imply polymorphism. While `A` may have a `B`, that doesn’t mean it is a `B` — functions which take a `B` won’t take an `A` instead. For that, we need interfaces, which we’ll encounter briefly later.

Meanwhile, Go takes a strong position on features that can lead to confusion and bugs. It omits OOP idioms such as inheritance and polymorphism, in favor of composition and simple interfaces. It downplays exception handling in favour of explicit errors in return values. There is exactly one correct way to lay out Go code, enforced by the gofmt tool. And so on.

Go is also a great language for writing concurrent programs: programs with many independently running parts. An obvious example is a webserver: Every request runs separately, but requests often need to share resources such as sessions, caches, or notification queues. This means skilled Go programmers need to deal with concurrent access to those resources.

While the Go language has an excellent set of low-level features for handling concurrency, using them directly can become complicated. In many cases, a handful of reusable abstractions over those low-level mechanisms makes life much easier.

In today’s Go programming tutorial, we’re going to look at one such abstraction: A wrapper which can turn any data structure into a transactional service. We’ll use a Fund type as an example – a simple store for our startup’s remaining funding, where we can check the balance and make withdrawals.

In this introduction to programming in Go, we’ll build the service in small steps, making a mess along the way and then cleaning it up again. Along the way, we’ll encounter lots of cool Go features, including:

  • Struct types and methods
  • Unit tests and benchmarks
  • Goroutines and channels
  • Interfaces and dynamic typing

A Simple Fund

Let’s write some code to track our startup’s funding. The fund starts with a given balance, and money can only be withdrawn (we’ll figure out revenue later).

This graphic depicts a simple goroutine example using the Go programming language.

Go is deliberately not an object-oriented language: There are no classes, objects, or inheritance. Instead, we’ll declare a struct type called Fund, with a simple function to create new fund structs, and two public methods.

fund.go

package funding

type Fund struct {
    // balance is unexported (private), because it's lowercase
    balance int
}

// A regular function returning a pointer to a fund
func NewFund(initialBalance int) *Fund {
    // We can return a pointer to a new struct without worrying about
    // whether it's on the stack or heap: Go figures that out for us.
    return &Fund{
        balance: initialBalance,
    }
}

// Methods start with a *receiver*, in this case a Fund pointer
func (f *Fund) Balance() int {
    return f.balance
}

func (f *Fund) Withdraw(amount int) {
    f.balance -= amount
}

Testing with benchmarks

Next we need a way to test Fund. Rather than writing a separate program, we’ll use Go’s testing package, which provides a framework for both unit tests and benchmarks. The simple logic in our Fund isn’t really worth writing unit tests for, but since we’ll be talking a lot about concurrent access to the fund later on, writing a benchmark makes sense.

Benchmarks are like unit tests, but include a loop which runs the same code many times (in our case, fund.Withdraw(1)). This allows the framework to time how long each iteration takes, averaging out transient differences from disk seeks, cache misses, process scheduling, and other unpredictable factors.

The testing framework wants each benchmark to run for at least 1 second (by default). To ensure this, it will call the benchmark multiple times, passing in an increasing “number of iterations” value each time (the b.N field), until the run takes at least a second.

For now, our benchmark will just deposit some money and then withdraw it one dollar at a time.

fund_test.go

package funding

import "testing"

func BenchmarkWithdrawals(b *testing.B) {
    // Add as many dollars as we have iterations this run
    fund := NewFund(b.N)

    // Burn through them one at a time until they are all gone
    for i := 0; i < b.N; i++ {
        fund.Withdraw(1)
    }

    if fund.Balance() != 0 {
        b.Error("Balance wasn't zero:", fund.Balance())
    }
}

Now let’s run it:

$ go test -bench . funding
testing: warning: no tests to run
PASS
BenchmarkWithdrawals    2000000000             1.69 ns/op
ok      funding    3.576s

That went well. We ran two billion (!) iterations, and the final check on the balance was correct. We can ignore the “no tests to run” warning, which refers to the unit tests we didn’t write (in later Go programming examples in this tutorial, the warning is snipped out).

Concurrent Access

Now let’s make the benchmark concurrent, to model different users making withdrawals at the same time. To do that, we’ll spawn ten goroutines and have each of them withdraw one tenth of the money.

How would we structure multiple concurrent goroutines in the Go language?

Goroutines are the basic building block for concurrency in the Go language. They are green threads – lightweight threads managed by the Go runtime, not by the operating system. This means you can run thousands (or millions) of them without any significant overhead. Goroutines are spawned with the go keyword, and always start with a function (or method call):

// Returns immediately, without waiting for `DoSomething()` to complete
go DoSomething()

Often, we want to spawn off a short one-time function with just a few lines of code. In this case we can use a closure instead of a function name:

go func() {
    // ... do stuff ...
}() // Must be a function *call*, so remember the ()

Once all our goroutines are spawned, we need a way to wait for them to finish. We could build one ourselves using channels, but we haven’t encountered those yet, so that would be skipping ahead.

For now, we can just use the WaitGroup type in Go’s standard library, which exists for this very purpose. We’ll create one (called “wg”) and call wg.Add(1) before spawning each worker, to keep track of how many there are. Then the workers will report back using wg.Done(). Meanwhile in the main goroutine, we can just say wg.Wait() to block until every worker has finished.

Inside the worker goroutines in our next example, we’ll use defer to call wg.Done().

defer takes a function (or method) call and runs it immediately before the current function returns, after everything else is done. This is handy for cleanup:

func() {
    resource.Lock()
    defer resource.Unlock()

    // Do stuff with resource
}()

This way we can easily match the Unlock with its Lock, for readability. More importantly, a deferred function will run even if there is a panic in the main function (something that we might handle via try-finally in other languages).

Lastly, deferred functions will execute in the reverse order to which they were called, meaning we can do nested cleanup nicely (similar to the C idiom of nested gotos and labels, but much neater):

func() {
    db.Connect()
    defer db.Disconnect()

    // If Begin panics, only db.Disconnect() will execute
    transaction.Begin()
    defer transaction.Close()

    // From here on, transaction.Close() will run first,
    // and then db.Disconnect()

    // ...
}()

OK, so with all that said, here’s the new version:

fund_test.go

package funding

import (
    "sync"
    "testing"
)

const WORKERS = 10

func BenchmarkWithdrawals(b *testing.B) {
    // Skip N = 1
    if b.N < WORKERS {
        return
    }

    // Add as many dollars as we have iterations this run
    fund := NewFund(b.N)

    // Casually assume b.N divides cleanly
    dollarsPerFounder := b.N / WORKERS

    // WaitGroup structs don't need to be initialized
    // (their "zero value" is ready to use).
    // So, we just declare one and then use it.
    var wg sync.WaitGroup

    for i := 0; i < WORKERS; i++ {
        // Let the waitgroup know we're adding a goroutine
        wg.Add(1)
        
        // Spawn off a founder worker, as a closure
        go func() {
            // Mark this worker done when the function finishes
            defer wg.Done()

            for i := 0; i < dollarsPerFounder; i++ {
                fund.Withdraw(1)
            }
            
        }() // Remember to call the closure!
    }

    // Wait for all the workers to finish
    wg.Wait()

    if fund.Balance() != 0 {
        b.Error("Balance wasn't zero:", fund.Balance())
    }
}

We can predict what will happen here. The workers will all execute Withdraw on top of each other. Inside it, f.balance -= amount will read the balance, subtract one, and then write it back. But sometimes two or more workers will both read the same balance, and do the same subtraction, and we’ll end up with the wrong total. Right?

$ go test -bench . funding
BenchmarkWithdrawals    2000000000             2.01 ns/op
ok      funding    4.220s

No, it still passes. What happened here?

Remember that goroutines are green threads – they’re managed by the Go runtime, not by the OS. The runtime schedules goroutines across however many OS threads it has available. At the time of writing this Go language tutorial, Go doesn’t try to guess how many OS threads it should use, and if we want more than one, we have to say so. Finally, the current runtime does not preempt goroutines – a goroutine will continue to run until it does something that suggests it’s ready for a break (like interacting with a channel).

All of this means that although our benchmark is now concurrent, it isn’t parallel. Only one of our workers will run at a time, and it will run until it’s done. We can change this by telling Go to use more threads, via the GOMAXPROCS environment variable.

$ GOMAXPROCS=4 go test -bench . funding
BenchmarkWithdrawals-4    --- FAIL: BenchmarkWithdrawals-4
    account_test.go:39: Balance wasn't zero: 4238
ok      funding    0.007s

That’s better. Now we’re obviously losing some of our withdrawals, as we expected.

In this Go programming example, the outcome of multiple parallel goroutines is not favorable.

Make it a server

At this point we have various options. We could add an explicit mutex or read-write lock around the fund. We could use a compare-and-swap with a version number. We could go all out and use a CRDT scheme (perhaps replacing the balance field with lists of transactions for each client, and calculating the balance from those).

But we won’t do any of those things now, because they’re messy or scary or both. Instead, we’ll decide that a fund should be a server. What’s a server? It’s something you talk to. In Go, things talk via channels.

Channels are the basic communication mechanism between goroutines. Values are sent to the channel (with channel <- value), and can be received on the other side (with value = <- channel). Channels are “goroutine safe”, meaning that any number of goroutines can send to and receive from them at the same time.

Buffering

Buffering communication channels can be a performance optimization in certain circumstances, but it should be used with great care (and benchmarking!).

However, there are uses for buffered channels which aren’t directly about communication.

For instance, a common throttling idiom creates a channel with (for example) buffer size `10` and then sends ten tokens into it immediately. Any number of worker goroutines are then spawned, and each receives a token from the channel before starting work, and sends it back afterward. Then, however many workers there are, only ten will ever be working at the same time.
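
Here is a sketch of that idiom; the job count and the use of empty structs as tokens are arbitrary choices for illustration:

package main

import (
    "fmt"
    "sync"
)

func main() {
    const maxConcurrent = 10
    const jobs = 50

    // The buffered channel acts as a pool of tokens rather than a communication pipe
    tokens := make(chan struct{}, maxConcurrent)
    for i := 0; i < maxConcurrent; i++ {
        tokens <- struct{}{}
    }

    var wg sync.WaitGroup
    for i := 0; i < jobs; i++ {
        wg.Add(1)
        go func(job int) {
            defer wg.Done()

            token := <-tokens                  // take a token before starting work
            defer func() { tokens <- token }() // and return it afterwards

            fmt.Println("processing job", job)
        }(i)
    }
    wg.Wait()
}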

By default, Go channels are unbuffered. This means that sending a value to a channel will block until another goroutine is ready to receive it. Go also supports fixed buffer sizes for channels (using make(chan someType, bufferSize)). However, for normal use, this is usually a bad idea.

Imagine a webserver for our fund, where each request makes a withdrawal. When things are very busy, the FundServer won’t be able to keep up, and requests trying to send to its command channel will start to block and wait. At that point we can enforce a maximum request count in the server, and return a sensible error code (like a 503 Service Unavailable) to clients over that limit. This is the best behavior possible when the server is overloaded.

Adding buffering to our channels would make this behavior less deterministic. We could easily end up with long queues of unprocessed commands based on information the client saw much earlier (and perhaps for requests which had since timed out upstream). The same applies in many other situations, like applying backpressure over TCP when the receiver can’t keep up with the sender.

In any case, for our Go example, we’ll stick with the default unbuffered behavior.

We’ll use a channel to send commands to our FundServer. Every benchmark worker will send commands to the channel, but only the server will receive them.

We could turn our Fund type into a server implementation directly, but that would be messy – we’d be mixing concurrency handling and business logic. Instead, we’ll leave the Fund type exactly as it is, and make FundServer a separate wrapper around it.

Like any server, the wrapper will have a main loop in which it waits for commands, and responds to each in turn. There’s one more detail we need to address here: The type of the commands.

A diagram of the fund being used as the server in this Go programming tutorial.

Pointers

We could have made our commands channel take *pointers* to commands (`chan *TransactionCommand`). Why didn’t we?

Passing pointers between goroutines is risky, because either goroutine might modify it. It’s also often less efficient, because the other goroutine might be running on a different CPU core (meaning more cache invalidation).

Whenever possible, prefer to pass plain values around.

In the next section below, we’ll be sending several different commands, each with its own struct type. We want the server’s Commands channel to accept any of them. In an OOP language we might do this via polymorphism: Have the channel take a superclass, of which the individual command types were subclasses. In Go, we use interfaces instead.

An interface is a set of method signatures. Any type that implements all of those methods can be treated as that interface (without being declared to do so). For our first run, our command structs won’t actually expose any methods, so we’re going to use the empty interface, interface{}. Since it has no requirements, any value (including primitive values like integers) satisfies the empty interface. This isn’t ideal – we only want to accept command structs – but we’ll come back to it later.

For now, let’s get started with the scaffolding for our server:

server.go

package funding

type FundServer struct {
    Commands chan interface{}
    fund *Fund
}

func NewFundServer(initialBalance int) *FundServer {
    server := &FundServer{
        // make() creates builtins like channels, maps, and slices
        Commands: make(chan interface{}),
        fund: NewFund(initialBalance),
    }

    // Spawn off the server's main loop immediately
    go server.loop()
    return server
}

func (s *FundServer) loop() {
    // The built-in "range" clause can iterate over channels,
    // amongst other things
    for command := range s.Commands {
    
        // Handle the command
        
    }
}

Now let’s add a couple of struct types for the commands:

type WithdrawCommand struct {
    Amount int
}

type BalanceCommand struct {
    Response chan int
}

The WithdrawCommand just contains the amount to withdraw. There’s no response. The BalanceCommand does have a response, so it includes a channel to send it on. This ensures that responses will always go to the right place, even if our fund later decides to respond out-of-order.

Now we can write the server’s main loop:

func (s *FundServer) loop() {
    for command := range s.Commands {

        // command is just an interface{}, but we can check its real type
        switch command.(type) {

        case WithdrawCommand:
            // And then use a "type assertion" to convert it
            withdrawal := command.(WithdrawCommand)
            s.fund.Withdraw(withdrawal.Amount)

        case BalanceCommand:
            getBalance := command.(BalanceCommand)
            balance := s.fund.Balance()
            getBalance.Response <- balance

        default:
            panic(fmt.Sprintf("Unrecognized command: %v", command))
        }
    }
}

Hmm. That’s sort of ugly. We’re switching on the command type, using type assertions, and possibly crashing. Let’s forge ahead anyway and update the benchmark to use the server.

func BenchmarkWithdrawals(b *testing.B) {
    // ...

    server := NewFundServer(b.N)

    // ...

    // Spawn off the workers
    for i := 0; i < WORKERS; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for i := 0; i < dollarsPerFounder; i++ {
                server.Commands <- WithdrawCommand{ Amount: 1 }
            }
        }()
    }

    // ...

    balanceResponseChan := make(chan int)
    server.Commands <- BalanceCommand{ Response: balanceResponseChan }
    balance := <- balanceResponseChan

    if balance != 0 {
        b.Error("Balance wasn't zero:", balance)
    }
}

That was sort of ugly too, especially when we checked the balance. Never mind. Let’s try it:

$ GOMAXPROCS=4 go test -bench . funding
BenchmarkWithdrawals-4     5000000           465 ns/op
ok      funding    2.822s

Much better, we’re no longer losing withdrawals. But the code is getting hard to read, and there are more serious problems. If we ever issue a BalanceCommand and then forget to read the response, our fund server will block forever trying to send it. Let’s clean things up a bit.

Make it a service

A server is something you talk to. What’s a service? A service is something you talk to with an API. Instead of having client code work with the command channel directly, we’ll make the channel unexported (private) and wrap the available commands up in functions.

type FundServer struct {
    commands chan interface{} // Lowercase name, unexported
    // ...
}

func (s *FundServer) Balance() int {
    responseChan := make(chan int)
    s.commands <- BalanceCommand{ Response: responseChan }
    return <- responseChan
}

func (s *FundServer) Withdraw(amount int) {
    s.commands <- WithdrawCommand{ Amount: amount }
}

Now our benchmark can just say server.Withdraw(1) and balance := server.Balance(), and there’s less chance of accidentally sending it invalid commands or forgetting to read responses.

Here is what using the fund as a service might look like in this sample Go language program.

There’s still a lot of extra boilerplate for the commands, but we’ll come back to that later.

Transactions

Eventually, the money always runs out. Let’s agree that we’ll stop withdrawing when our fund is down to its last ten dollars, and spend that money on a communal pizza to celebrate or commiserate around. Our benchmark will reflect this:

// Spawn off the workers
for i := 0; i < WORKERS; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := 0; i < dollarsPerFounder; i++ {

            // Stop when we're down to pizza money
            if server.Balance() <= 10 {
                break
            }
            server.Withdraw(1)
        }
    }()
}

// ...

balance := server.Balance()
if balance != 10 {
    b.Error("Balance wasn't ten dollars:", balance)
}

This time we really can predict the result.

$ GOMAXPROCS=4 go test -bench . funding
BenchmarkWithdrawals-4    --- FAIL: BenchmarkWithdrawals-4
    fund_test.go:43: Balance wasn't ten dollars: 6
ok      funding    0.009s

We’re back where we started – several workers can read the balance at once, and then all update it. To deal with this we could add some logic in the fund itself, like a minimumBalance property, or add another command called WithdrawIfOverXDollars. These are both terrible ideas. Our agreement is amongst ourselves, not a property of the fund. We should keep it in application logic.

What we really need is transactions, in the same sense as database transactions. Since our service executes only one command at a time, this is super easy. We’ll add a Transact command which contains a callback (a closure). The server will execute that callback inside its own goroutine, passing in the raw Fund. The callback can then safely do whatever it likes with the Fund.

Semaphores and errors

In this next example we’re doing two small things wrong.

First, we’re using a `Done` channel as a semaphore to let calling code know when its transaction has finished. That’s fine, but why is the channel type `bool`? We’ll only ever send `true` into it to mean “done” (what would sending `false` even mean?). What we really want is a single-state value (a value that has no value?). In Go, we can do this using the empty struct type: `struct{}`. This also has the advantage of using less memory. In the example we’ll stick with `bool` so as not to look too scary.

Second, our transaction callback isn’t returning anything. As we’ll see in a moment, we can get values out of the callback into calling code using scope tricks. However, transactions in a real system would presumably fail sometimes, so the Go convention would be to have the transaction return an `error` (and then check whether it was `nil` in calling code).

We’re not doing that either for now, since we don’t have any errors to generate.

// Typedef the callback for readability
type Transactor func(fund *Fund)

// Add a new command type with a callback and a semaphore channel
type TransactionCommand struct {
    Transactor Transactor
    Done chan bool
}

// ...

// Wrap it up neatly in an API method, like the other commands
func (s *FundServer) Transact(transactor Transactor) {
    command := TransactionCommand{
        Transactor: transactor,
        Done: make(chan bool),
    }
    s.commands <- command
    <- command.Done
}

// ...

func (s *FundServer) loop() {
    for command := range s.commands {
        switch command.(type) {
        // ...

        case TransactionCommand:
            transaction := command.(TransactionCommand)
            transaction.Transactor(s.fund)
            transaction.Done <- true

        // ...
        }
    }
}

Our transaction callbacks don’t return anything directly, but Go makes it easy to get values out of a closure, so we’ll do that in the benchmark to set the pizzaTime flag when money runs low:

pizzaTime := false
for i := 0; i < dollarsPerFounder; i++ {

    server.Transact(func(fund *Fund) {
        if fund.Balance() <= 10 {
            // Set it in the outside scope
            pizzaTime = true
            return
        }
        fund.Withdraw(1)
    })

    if pizzaTime {
        break
    }
}

And check that it works:

$ GOMAXPROCS=4 go test -bench . funding
BenchmarkWithdrawals-4     5000000           775 ns/op
ok      funding    4.637s

Nothing but transactions

You may have spotted an opportunity to clean things up some more now. Since we have a generic Transact command, we don’t need WithdrawCommand or BalanceCommand anymore. We’ll rewrite them in terms of transactions:

func (s *FundServer) Balance() int {
    var balance int
    s.Transact(func(f *Fund) {
        balance = f.Balance()
    })
    return balance
}

func (s *FundServer) Withdraw(amount int) {
    s.Transact(func (f *Fund) {
        f.Withdraw(amount)
    })
}

Now the only command the server takes is TransactionCommand, so we can remove the whole interface{} mess in its implementation, and have it accept only transaction commands:

type FundServer struct {
    commands chan TransactionCommand
    fund *Fund
}

func (s *FundServer) loop() {
    for transaction := range s.commands {
        // Now we don't need any type-switch mess
        transaction.Transactor(s.fund)
        transaction.Done <- true
    }
}

Much better.

There’s a final step we could take here. Apart from its convenience functions for Balance and Withdraw, the service implementation is no longer tied to Fund. Instead of managing a Fund, it could manage an interface{} and be used to wrap anything. However, each transaction callback would then have to convert the interface{} back to a real value:

type Transactor func(interface{})

server.Transact(func(managedValue interface{}) {
    fund := managedValue.(*Fund)
    // Do stuff with fund ...
})

This is ugly and error-prone. What we really want is compile-time generics, so we can “template” out a server for a particular type (like *Fund).

Unfortunately, Go doesn’t support generics – yet. It’s expected to arrive eventually, once someone figures out some sensible syntax and semantics for it. In the meantime, careful interface design often removes the need for generics, and when it doesn’t, we can get by with type assertions (which are checked at runtime).

So we’re done, right?

Yes.

Well, okay, no.

For instance:

  • A panic in a transaction will kill the whole service.
  • There are no timeouts. A transaction that never returns will block the service forever.
  • If our Fund grows some new fields and a transaction crashes halfway through updating them, we’ll have inconsistent state.
  • Transactions are able to leak the managed Fund object, which isn’t good.
  • There’s no reasonable way to do transactions across multiple funds (like withdrawing from one and depositing in another). We can’t just nest our transactions because it would allow deadlocks.
  • Running a transaction asynchronously now requires a new goroutine and a lot of messing around. Relatedly, we probably want to be able to read the most recent Fund state from elsewhere while a long-running transaction is in progress.

In the next Go programming tutorial, we’ll look at some ways to address these issues.

This article was written by Brendon Hogger, a Toptal Python developer.

Introduction to Kotlin: Android Programming For Humans


In a perfect Android world, the main language, Java, would be really modern, clear, and elegant. You could write less while doing more, and whenever a new feature appeared, developers could use it just by bumping a version in Gradle. The resulting app would turn out fully testable, extensible, and maintainable. Our activities would not be too large and complicated, we could change data sources from a database to the web without tons of differences, and so on. Sounds great, right? Unfortunately, the Android world isn’t this ideal. Google is still striving for perfection, but we all know that ideal worlds don’t exist. Thus, we have to help ourselves along that great journey through the Android world.

Can Kotlin replace Java?

Kotlin is a popular new player in the Android world. But can it ever replace Java?

What Is Kotlin, and Why Should You Use It?

So, first, the language. I think that Java isn’t the master of elegance or clarity, and it is neither modern nor expressive (and I’m guessing you agree). The disadvantage is that below Android N, we are still limited to Java 6 (including some small parts of Java 7). Developers can also attach RetroLambda to use lambda expressions in their code, which is very useful when using RxJava. Above Android N, we can use some of Java 8’s new functionality, but it’s still that old, heavy Java. Very often I hear Android developers say, “I wish Android supported a nicer language, like iOS does with Swift.” And what if I told you that you can use a very nice, simple language, with null safety, lambdas, and many other nice new features? Welcome to Kotlin.

What is Kotlin?

Kotlin is a new language (sometimes referred to as the Swift of Android), developed by the JetBrains team, and now at version 1.0.2. What makes it useful in Android development is that it compiles to JVM bytecode, and can also be compiled to JavaScript. It is fully interoperable with Java, and Kotlin code can simply be converted to Java code and vice versa (there is a plugin from JetBrains). That means Kotlin can use any framework, library, etc. written in Java. On Android, it integrates via Gradle. If you have an existing Android app and you want to implement a new feature in Kotlin without rewriting the whole app, just start writing in Kotlin, and it will work.

But what are the ‘great new features’? Let me list a few:

Optional and Named Function Parameters

fun createDate(day: Int, month: Int, year: Int, hour: Int = 0, minute: Int = 0, second: Int = 0) {
   println("$day-$month-$year $hour:$minute:$second")
}

We Can Call Method createDate in Different Ways

createDate(1,2,2016) prints:  ‘1-2-2016 0:0:0’
createDate(1,2,2016, 12) prints: ‘1-2-2016 12:0:0’
createDate(1,2,2016, minute = 30) prints: ‘1-2-2016 0:30:0’

Null Safety

If a variable can be null, the code will not compile unless we explicitly handle the null case. The following code will have an error – nullableVar may be null:

var nullableVar: String? = ""
nullableVar.length

To compile, we have to check if it’s not null:

if(nullableVar != null){
	nullableVar.length
}

Or, shorter:

nullableVar?.length

This way, if nullableVar is null, nothing happens. Alternatively, we can declare the variable as non-nullable, without a question mark after the type:

var nonNullableVar: String = ""
nonNullableVar.length

This code compiles, and if we try to assign null to nonNullableVar, the compiler will show an error.

There is also the very useful Elvis operator:

var stringLength = nullableVar?.length ?: 0 

Then, when nullableVar is null (so nullableVar?.length returns null), stringLength will have value 0.

Mutable and Immutable Variables

In the example above, I use var when defining a variable. This makes it mutable; we can reassign it whenever we want. If we want the variable to be immutable (and in many cases we should), we use val (as in value, not variable):

val immutable: Int = 1

After that, the compiler will not allow us to reassign to immutable.

Lambdas

We all know what a lambda is, so here I will just show how we can use it in Kotlin:

button.setOnClickListener({ view -> Log.d("Kotlin","Click")})

Or if the function is the only or last argument:

button.setOnClickListener { Log.d("Kotlin","Click")}

Extensions

Extensions are a very helpful language feature, thanks to which we can “extend” existing classes, even when they are final or we don’t have access to their source code.

For example, to get the string value from an EditText, instead of writing editText.text.toString() every time, we can write a function:

fun EditText.textValue(): String{
   return text.toString()
}

Or shorter:

fun EditText.textValue() = text.toString()

And now, with every instance of EditText:

editText.textValue()

Or, we can add a property returning the same:

var EditText.textValue: String
   get() = text.toString()
   set(v) {setText(v)}

Operator Overloading

Operator overloading is sometimes useful if we want to add, multiply, or compare objects. Kotlin allows overloading of binary operators (plus, minus, plusAssign, range, etc.), array operators (get, set, get range, set range), equals, and unary operations (+a, -a, etc.).

Data Class

How many lines of code do you need to implement a User class in Java with three properties, plus equals(), hashCode(), toString(), and copy()? In Kotlin you need only one line:

data class User(val name: String, val surname: String, val age: Int)

This data class provides equals(), hashCode() and copy() methods, and also toString(), which prints User as:

User(name=John, surname=Doe, age=23)

Data classes also provide some other useful functions and properties, which you can see in Kotlin documentation.

Anko and Kotlin Android Extensions

You use Butterknife or a similar view-binding library, don’t you? What if you didn’t need to use any library at all, and after declaring views in XML could just use them from code by their ID (like with XAML in C#):

<Button
        android:id="@+id/loginBtn"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />
loginBtn.setOnClickListener{}

Kotlin’s Android Extensions plugin (commonly used alongside the Anko library) makes this possible: you don’t need to tell your activity what loginBtn is – it knows it just by “importing” the XML:

import kotlinx.android.synthetic.main.activity_main.*

There are many other useful things in Anko, including starting activities, showing toasts, and so on. This is not the main goal of Anko – it is designed for easily creating layouts from code. So if you need to create a layout programmatically, this is the best way.

This is only a short overview of Kotlin. I recommend reading Antonio Leiva’s blog and his book – Kotlin for Android Developers – and of course the official Kotlin site.

What Is MVP and Why?

A nice, powerful, and clear language is not enough. It’s very easy to write messy apps in any language without a good architecture. Android developers (mostly those who are getting started, but also more advanced ones) often give the Activity responsibility for everything around it. The Activity (or Fragment, or other view) downloads data, saves it, presents it, responds to user interactions, edits data, manages all child views . . . and often much more. It’s too much for such unstable objects as Activities or Fragments (it’s enough to rotate the screen and the Activity says ‘Goodbye….’).

A very good idea is to isolate responsibilities from views and make views as dumb as possible. Views (Activities, Fragments, custom views, or whatever presents data on screen) should only be responsible for managing their subviews. Views should have presenters, which communicate with the model and tell the views what to do. This, in short, is the Model-View-Presenter pattern (to me, it should be named Model-Presenter-View to show the connections between the layers).

MVC vs MVP

“Hey, I know something like that, and it’s called MVC!” you might be thinking. No, MVP is not the same as MVC. In the MVC pattern, your view can communicate with the model. In MVP, you don’t allow any communication between these two layers – the only way the View can communicate with the Model is through the Presenter. The only thing the View knows about the Model is the data structure. The View knows how to, for example, display a User, but doesn’t know when. Here’s a simple example:

The View knows: “I’m an Activity, I have two EditTexts and one Button. When somebody clicks the button, I should tell my presenter about it and pass it the EditTexts’ values. And that’s all – I can sleep until the next click, or until the presenter tells me what to do.”

The Presenter knows that somewhere there is a View, and it knows what operations this View can perform. It also knows that when it receives two strings, it should create a User from them and send the data to the model to save, and if the save is successful, tell the View to ‘Show success info’.

The Model just knows where the data is, where it should be saved, and what operations should be performed on it.

Applications written in MVP are easy to test, maintain, and reuse. A pure presenter should know nothing about the Android platform. It should be a pure Java (or Kotlin, in our case) class. Thanks to this we can reuse our presenter in other projects. We can also easily write unit tests, testing the Model, View, and Presenter separately.

The MVP pattern leads to a better, less complex codebase by keeping the user interface and business logic truly separate.

A little digression: MVP should be a part of Uncle Bob’s Clean Architecture to make applications even more flexible and nicely architected. I’ll try to write about that next time.

Sample App with MVP and Kotlin

That’s enough theory; let’s see some code! Okay, let’s try to create a simple app. The main goal of this app is to create a user. The first screen will have two EditTexts (Name and Surname) and one Button (Save). After entering a name and surname and clicking ‘Save’, the app should show ‘User is saved’ and go to the next screen, where the saved name and surname are presented. When the name or surname is empty, the app should not save the user, and should show an error indicating what’s wrong.

The first thing to do after creating an Android Studio project is to configure Kotlin. Install the Kotlin plugin and, after a restart, go to Tools > Kotlin and click ‘Configure Kotlin in Project’. The IDE will add the Kotlin dependencies to Gradle. If you have any existing code, you can easily convert it to Kotlin (Ctrl+Shift+Alt+K or Code > Convert Java File to Kotlin File). If something is wrong and the project does not compile, or Gradle does not see Kotlin, you can check the code of the app, available on GitHub.

Kotlin not only interoperates well with Java frameworks and libraries, it allows you to continue using most of the same tools that you are already familiar with.

Now that we have a project, let’s start by creating our first view – CreateUserView. This view should have the functionalities mentioned earlier, so we can write an interface for that:

interface CreateUserView : View {
   fun showEmptyNameError() /* show error when name is empty */
   fun showEmptySurnameError() /* show error when surname is empty */
   fun showUserSaved() /* show user saved info */
   fun showUserDetails(user: User) /* show user details */
}

As you can see, Kotlin is similar to Java in declaring functions. All of these are functions that return nothing, and the last one takes a single parameter. One difference is that the parameter type comes after its name. The View interface is not from Android – it’s our own simple, empty interface:

interface View

A basic Presenter interface should have a property of the View type, and at least one method (onDestroy, for example) where this property will be set to null:

interface Presenter<T : View> {
   var view: T?

   fun onDestroy(){
       view = null
   }
}

Here you can see another Kotlin feature – you can declare properties in interfaces, and also implement methods there.

Our CreateUserView needs to communicate with CreateUserPresenter. The only additional function that this Presenter needs is saveUser with two string arguments:

interface CreateUserPresenter<T : View>: Presenter<T> {
   fun saveUser(name: String, surname: String)
}

We also need the Model definition – it’s the data class mentioned earlier:

data class User(val name: String, val surname: String)

After declaring all interfaces, we can start implementing.

CreateUserPresenter will be implemented in CreateUserPresenterImpl:

class CreateUserPresenterImpl(override var view: CreateUserView?): CreateUserPresenter<CreateUserView> {

   override fun saveUser(name: String, surname: String) {
   }
}

The first line, with the class definition:

CreateUserPresenterImpl(override var view: CreateUserView?)

is the primary constructor; we use it to assign the view property defined in the interface.

MainActivity, which is our CreateUserView implementation, needs a reference to CreateUserPresenter:

class MainActivity : AppCompatActivity(), CreateUserView {

   private val presenter: CreateUserPresenter<CreateUserView> by lazy {
       CreateUserPresenterImpl(this)
   }

   override fun onCreate(savedInstanceState: Bundle?) {
       super.onCreate(savedInstanceState)
       setContentView(R.layout.activity_main)

       saveUserBtn.setOnClickListener{
           presenter.saveUser(userName.textValue(), userSurname.textValue()) /* use of the textValue() extension, mentioned earlier */
       }
   }

   override fun showEmptyNameError() {
       userName.error = getString(R.string.name_empty_error) /* it's equal to userName.setError() - Kotlin allows us to use property */

   }

   override fun showEmptySurnameError() {
       userSurname.error = getString(R.string.surname_empty_error)
   }

   override fun showUserSaved() {
       toast(R.string.user_saved) /* anko extension - equal to Toast.makeText(this, R.string.user_saved, Toast.LENGTH_LONG) */

   }

   override fun showUserDetails(user: User) {
      
   }

   override fun onDestroy() {
       super.onDestroy()
       presenter.onDestroy()
   }
}

At the beginning of the class, we defined our presenter:

private val presenter: CreateUserPresenter<CreateUserView> by lazy {
       CreateUserPresenterImpl(this)
}

It is defined as immutable (val) and is created by the lazy delegate, so it will be initialized the first time it is needed. Moreover, we are sure it will not be null (there is no question mark after the type).

When the user clicks the Save button, the View sends the EditTexts’ values to the Presenter. The user should then be saved, so we have to implement the saveUser method in the Presenter (and some of the Model’s functions):

override fun saveUser(name: String, surname: String) {
   val user = User(name, surname)
   when(UserValidator.validateUser(user)){
       UserError.EMPTY_NAME -> view?.showEmptyNameError()
       UserError.EMPTY_SURNAME -> view?.showEmptySurnameError()
       UserError.NO_ERROR -> {
           UserStore.saveUser(user)
            view?.showUserSaved()
           view?.showUserDetails(user)
       }
   }
}

When a User is created, it is sent to UserValidator to check its validity. Then, according to the result of the validation, the proper method is called. The when() {} construct is the same as switch/case in Java, but it is more powerful – Kotlin allows not only enums or ints in the ‘case’, but also ranges, strings, or object types. It must cover all possibilities or have an else branch. Here, it covers all UserError values.

By using view?.showEmptyNameError() (with a question mark after view), we are protected from a NullPointerException. The view can be set to null in the onDestroy method, and with this construct, nothing will happen in that case.

When the User model has no errors, the Presenter tells UserStore to save it, and then instructs the View to show success and the user’s details.

As mentioned earlier, we have to implement some model things:

enum class UserError {
   EMPTY_NAME,
   EMPTY_SURNAME,
   NO_ERROR
}

object UserStore {
   fun saveUser(user: User){
       //Save user somewhere: Database, SharedPreferences, send to web...
   }
}

object UserValidator {

   fun validateUser(user: User): UserError {
       with(user){
           if(name.isNullOrEmpty()) return UserError.EMPTY_NAME
           if(surname.isNullOrEmpty()) return UserError.EMPTY_SURNAME
       }

       return UserError.NO_ERROR
   }
}

The most interesting thing here is UserValidator. By using the object keyword, we create a singleton, with no worries about threads, private constructors, and so on.

Next, in the validateUser(user) method there is a with(user) {} expression. Code within such a block is executed in the context of the object passed to with, so here name and surname are the User’s properties.

There is also another little thing: all the code above, from the enum to UserValidator, is placed in one file (the definition of the User class is there too). Kotlin does not force you to put each public class in its own file (or to name the class exactly like the file). So if you have some short pieces of related code (data classes, extensions, functions, constants – Kotlin doesn’t require a class for a function or a constant), you can place them in a single file instead of spreading them across many files in the project.

When a user is saved, our app should display that. We need another View – it can be any Android view, custom View, Fragment or Activity. I chose Activity.

So, let’s define UserDetailsView interface. It can show user, but it should also show an error when user is not present:

interface UserDetailsView {
   fun showUserDetails(user: User)
   fun showNoUserError()
}

Next, UserDetailsPresenter. It should have a user property:

interface UserDetailsPresenter<T: View>: Presenter<T> {
   var user: User?
}

This interface will be implemented in UserDetailsPresenterImpl. It has to override user property. Every time this property is assigned, user should be refreshed on the view. We can use a property setter for this:

class UserDetailsPresenterImpl(override var view: UserDetailsView?): UserDetailsPresenter<UserDetailsView> {
   override var user: User? = null
       set(value) {
           field = value
           if(field != null){
               view?.showUserDetails(field!!)
           } else {
               view?.showNoUserError()
           }
       }

}

The UserDetailsView implementation, UserDetailsActivity, is very simple. Just like before, we have a presenter object created lazily. The user to display should be passed via the intent. There is one little problem with this for now, and we will solve it in a moment. When the View has the user from the intent, it needs to assign it to its presenter. After that, the user will be refreshed on the screen, or, if it is null, the error will appear (and the activity will finish – but the presenter doesn’t know about that):

class UserDetailsActivity: AppCompatActivity(), UserDetailsView {

   private val presenter: UserDetailsPresenter<UserDetailsView> by lazy {
       UserDetailsPresenterImpl(this)
   }

   override fun onCreate(savedInstanceState: Bundle?) {
       super.onCreate(savedInstanceState)
       setContentView(R.layout.activity_user_details)

       val user = intent.getParcelableExtra<User>(USER_KEY)
       presenter.user = user
   }

   override fun showUserDetails(user: User) {
       userFullName.text = "${user.name} ${user.surname}"
   }

   override fun showNoUserError() {
       toast(R.string.no_user_error)
       finish()
   }

   override fun onDestroy() {
       super.onDestroy()
       presenter.onDestroy()
   }

}

Passing objects via intents requires that the object implements the Parcelable interface. This is very ‘dirty’ work. Personally, I hate doing it because of all the CREATORs, properties, saving, restoring, and so on. Fortunately, there is a plugin for this, Parcelable for Kotlin. After installing it, we can generate the Parcelable implementation with one click.

The last thing to do is to implement showUserDetails(user: User) in our MainActivity:

override fun showUserDetails(user: User) {
   startActivity<UserDetailsActivity>(USER_KEY to user) /* anko extension - starts UserDetailsActivity and pass user as USER_KEY in intent */
}

And that’s all.

Demo Android app in Kotlin

We have a simple app that saves a User (actually, it is not really saved, but we can add that functionality without touching the presenter or the view) and presents it on the screen. In the future, if we want to change the way the user is presented on the screen – say, from two activities to two fragments in one activity, or to two custom views – the changes will only be in the View classes (as long as we don’t change the functionality or the model’s structure). The Presenter, which doesn’t know what exactly the View is, won’t need any changes.

What’s Next?

In our app, the Presenter is created every time an activity is created. Whether it should instead persist across activity instances is a subject of much discussion on the Internet. For me, it depends on the app, its needs, and the developer. Sometimes it’s better to destroy the presenter, sometimes not. If you decide to persist one, a very interesting technique is to use a LoaderManager for that.

As mentioned before, MVP should be part of Uncle Bob’s Clean Architecture. Moreover, good developers should use Dagger to inject presenters’ dependencies into activities. It also helps to maintain, test, and reuse code in the future. Currently, Kotlin works very well with Dagger (before the official release it wasn’t so easy), and also with other helpful Android libraries.

Wrap Up

For me, Kotlin is a great language. It’s modern, clear, and expressive while still being developed by great people. And we can use any new release on any Android device and version. Whatever makes me angry at Java, Kotlin improves.

Of course, as I said, nothing is ideal. Kotlin also has some disadvantages. The newest Gradle plugin versions (mainly alphas or betas) don’t always work well with the language. Many people complain that build times are a bit longer than with pure Java, and APKs gain a few extra MBs. But Android Studio and Gradle are still improving, and phones have more and more space for apps. That’s why I believe Kotlin can be a very nice language for every Android developer. Just give it a try, and share what you think in the comments section below.

Source code of the sample app is available on Github: github.com/tomaszczura/AndroidMVPKotlin

This article was written by Tomasz Czura, a Toptal Java developer.

Tips And Tools For Optimizing Android Apps


Android devices have a lot of cores, so writing smooth apps is a simple task for anyone, right? Wrong. As everything on Android can be done in a lot of different ways, picking the best option can be tough. If you want to choose the most efficient method, you have to know what’s happening under the hood. Luckily, you don’t have to rely on your feelings or sense of smell, since there are a lot of tools out there that can help you find bottlenecks by measuring and describing what’s going on. Properly optimized, smooth apps greatly improve the user experience and also drain less battery.

Let’s see some numbers first to consider how important optimization really is. According to a Nimbledroid post, 86% of users (including me) have uninstalled apps after using them only once due to poor performance. If you’re loading some content, you have less than 11 seconds to show it to the user. Only every third user will give you more time. You might also get a lot of bad reviews on Google Play because of it.

Build Better Apps: Android Performance Patterns

Testing your users’ patience is a shortcut to uninstallation.

The first thing every user notices over and over is the app’s startup time. According to another Nimbledroid post, out of the 100 top apps, 40 start in under 2 seconds, and 70 start in under 3 seconds. So if possible, you should generally display some content as soon as possible and delay the background checks and updates a bit.

Always remember, premature optimization is the root of all evil. You also should not waste too much time on micro-optimization. You will see the most benefit from optimizing code that runs often. For example, this includes the onDraw() function, which runs every frame, ideally 60 times per second. Drawing is the slowest operation out there, so try redrawing only what you have to. More about this later.

Performance Tips

Enough theory, here is a list of some of the things you should consider if performance matters to you.

1. String vs StringBuilder

Let’s say that you have a String, and for some reason you want to append more Strings to it 10 thousand times. The code could look something like this.

String string = "hello";
for (int i = 0; i < 10000; i++) {
    string += " world";
}

You can see in the Android Studio monitors how inefficient this kind of String concatenation can be. There are tons of garbage collections (GC) going on.

String vs StringBuilder

This operation takes around 8 seconds on my fairly good device, which has Android 5.1.1. The more efficient way of achieving the same goal is using a StringBuilder, like this.

StringBuilder sb = new StringBuilder("hello");
for (int i = 0; i < 10000; i++) {
    sb.append(" world");
}
String string = sb.toString();

On the same device this happens almost instantly, in less than 5ms. The CPU and Memory visualizations are almost totally flat, so you can imagine how big this improvement is. Notice though, that for achieving this difference, we had to append 10 thousand Strings, which you probably don’t do often. So in case you are adding just a couple Strings once, you will not see any improvement. By the way, if you do:

String string = "hello" + " world";

It gets internally converted to a StringBuilder, so it will work just fine.

You might be wondering, why is concatenating Strings the first way so slow? It is caused by the fact that Strings are immutable, so once they are created, they cannot be changed. Even if you think you are changing the value of a String, you are actually creating a new String with the new value. In an example like:

String myString = "hello";
myString += " world";

What you will get in memory is not 1 String “hello world”, but actually 2 Strings. The String myString will contain “hello world”, as you would expect. However, the original String with the value “hello” is still alive, without any reference to it, waiting to be garbage collected. This is also the reason why you should store passwords in a char array instead of a String. If you store a password as a String, it will stay in the memory in human-readable format until the next GC for an unpredictable length of time. Back to the immutability described above, the String will stay in the memory even if you assign it another value after using it. If you, however, empty the char array after using the password, it will disappear from everywhere.
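As a quick illustration of that last point (a minimal sketch – the class and method names here are made up), you can wipe the array as soon as you are done with it:

import java.util.Arrays;

public class LoginHelper {

    public void login(char[] password) {
        try {
            authenticate(password); // hypothetical call to your auth backend
        } finally {
            // Overwrite the characters so the password doesn't linger in memory
            Arrays.fill(password, '\0');
        }
    }

    private void authenticate(char[] password) {
        // ... talk to your authentication backend here
    }
}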

2. Picking the Correct Data Type

Before you start writing code, you should decide what data types you will use for your collections. For example, should you use a Vector or an ArrayList? Well, it depends on your use case. If you need a thread-safe collection, which will allow only one thread at a time to work with it, you should pick a Vector, as it is synchronized. In other cases you should probably stick to an ArrayList, unless you really have a specific reason to use vectors.

How about the case when you want a collection with unique objects? Well, you should probably pick a Set. They cannot contain duplicates by design, so you will not have to take care of it yourself. There are multiple types of sets, so pick one that fits your use case. For a simple group of unique items, you can use a HashSet. If you want to preserve the order of items in which they were inserted, pick a LinkedHashSet. A TreeSet sorts items automatically, so you will not have to call any sorting methods on it. It should also sort the items efficiently, without you having to think of sorting algorithms.
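As a small, self-contained illustration of the difference between these set types (not tied to any particular app), the same items come back in different orders:

import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.TreeSet;

public class SetDemo {
    public static void main(String[] args) {
        String[] meals = {"pizza", "burger", "apple", "burger"};

        Set<String> hashSet = new HashSet<>();             // no guaranteed order
        Set<String> linkedHashSet = new LinkedHashSet<>(); // insertion order
        Set<String> treeSet = new TreeSet<>();             // sorted order

        for (String meal : meals) {
            hashSet.add(meal);
            linkedHashSet.add(meal);
            treeSet.add(meal);
        }

        // The duplicate "burger" is dropped by all three sets
        System.out.println(hashSet);       // e.g. [burger, apple, pizza] (order may vary)
        System.out.println(linkedHashSet); // [pizza, burger, apple]
        System.out.println(treeSet);       // [apple, burger, pizza]
    }
}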

Data dominates. If you’ve chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
— Rob Pike’s 5 Rules of Programming

Sorting integers or strings is pretty straightforward. However, what if you want to sort a class by some property? Let’s say you are writing a list of meals you eat, and store their names and timestamps. How would you sort the meals by timestamp from the lowest to highest? Luckily, it’s pretty simple. It’s enough to implement the Comparable interface in the Meal class and override the compareTo() function. To sort the meals by lowest timestamp to highest, we could write something like this.

@Override
public int compareTo(Object object) {
    Meal meal = (Meal) object;
    if (this.timestamp < meal.getTimestamp()) {
        return -1;
    } else if (this.timestamp > meal.getTimestamp()) {
        return 1;
    }
    return 0;
}
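As a side note, if you declare the class as class Meal implements Comparable<Meal>, the override can take a Meal directly and sorting a list becomes a one-liner. Here is a sketch of that variant; the field and constructor are assumptions based on the snippet above:

import java.util.Collections;
import java.util.List;

public class Meal implements Comparable<Meal> {

    private final String name;
    private final long timestamp;

    public Meal(String name, long timestamp) {
        this.name = name;
        this.timestamp = timestamp;
    }

    public long getTimestamp() {
        return timestamp;
    }

    @Override
    public int compareTo(Meal meal) {
        // Same ordering as above: lowest timestamp first
        return Long.compare(this.timestamp, meal.getTimestamp());
    }

    public static void sortByTimestamp(List<Meal> meals) {
        Collections.sort(meals); // uses compareTo()
    }
}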

3. Location Updates

There are a lot of apps out there which collect the user’s location. You should use the Google Location Services API for that purpose, which contains a lot of useful functions. There is a separate article about using it, so I will not repeat it.

I’d just like to stress some important points from a performance perspective.

First of all, request only as precise a location as you actually need. For example, if you are doing some weather forecasting, you don’t need the most accurate location. Getting just a very rough, network-based area is faster and more battery efficient. You can achieve this by setting the priority to LocationRequest.PRIORITY_LOW_POWER.

You can also use the LocationRequest function setSmallestDisplacement(). Setting this value, in meters, means your app will not be notified about a location change smaller than the given distance. For example, if you have a map with nearby restaurants around you, and you set the smallest displacement to 20 meters, the app will not make requests to check for restaurants if the user is just walking around a room. The requests would be useless, as there wouldn’t be any new nearby restaurant anyway.

The second rule is to request location updates only as often as you need them. This is quite self-explanatory. If you are really building that weather forecast app, you do not need to request the location every few seconds, as you probably don’t have such precise forecasts (contact me if you do). You can use the setInterval() function to set the interval at which the device will update your app about the location. If multiple apps keep requesting the user’s location, every app will be notified at every new location update, even if you have a higher setInterval() set. To prevent your app from being notified too often, make sure to always set a fastest update interval with setFastestInterval().

And finally, the third rule is to request location updates only when you need them. If you are displaying some nearby objects on the map every x seconds and the app goes into the background, you do not need to know the new location. There is no reason to update the map if the user cannot see it anyway. Make sure to stop listening for location updates when appropriate, preferably in onPause(). You can then resume the updates in onResume().
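Putting the three rules together, a rough sketch using the Google Location Services API could look like the following. The helper class, the interval values, and the assumption that the GoogleApiClient is already connected are all illustrative, not a prescription:

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.location.LocationListener;
import com.google.android.gms.location.LocationRequest;
import com.google.android.gms.location.LocationServices;

/** Hypothetical helper that follows the three rules above. */
public class LocationUpdatesHelper {

    private final GoogleApiClient googleApiClient;   // assumed to be connected elsewhere
    private final LocationListener locationListener; // your callback

    public LocationUpdatesHelper(GoogleApiClient client, LocationListener listener) {
        this.googleApiClient = client;
        this.locationListener = listener;
    }

    private LocationRequest buildLocationRequest() {
        return LocationRequest.create()
                .setPriority(LocationRequest.PRIORITY_LOW_POWER) // rough, battery-friendly fixes
                .setInterval(10 * 60 * 1000)        // updates roughly every 10 minutes
                .setFastestInterval(5 * 60 * 1000)  // but never more often than every 5 minutes
                .setSmallestDisplacement(20);       // ignore movements under 20 meters
    }

    /** Call from onResume(). */
    public void start() {
        LocationServices.FusedLocationApi.requestLocationUpdates(
                googleApiClient, buildLocationRequest(), locationListener);
    }

    /** Call from onPause() – stop listening while the user can't see the screen. */
    public void stop() {
        LocationServices.FusedLocationApi.removeLocationUpdates(googleApiClient, locationListener);
    }
}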

4. Network Requests

There is a high chance that your app uses the internet for downloading or uploading data. If it does, you have several reasons to pay attention to how you handle network requests. One of them is mobile data, which is very limited for a lot of people, and you shouldn’t waste it.

The second one is battery. Both WiFi and mobile networks can consume quite a lot of it if they are used too much. Let’s say that you want to download 1 kb. To make a network request, you have to wake up the cellular or WiFi radio, then you can download your data. However, the radio will not fall asleep immediately after the operation. It will stay in a fairly active state for about 20-40 more seconds, depending on your device and carrier.

Network Requests

So, what can you do about it? Batch. To avoid waking up the radio every couple seconds, prefetch things that the user might need in the upcoming minutes. The proper way of batching is highly dynamic depending on your app, but if it is possible, you should download the data the user might need in the next 3-4 minutes. One could also edit the batch parameters based on the user’s internet type, or charging state. For example, if the user is on WiFi while charging, you can prefetch a lot more data than if the user is on mobile internet with low battery. Taking all these variables into account can be a tough thing, which only few people would do. Luckily though, there is GCM Network Manager to the rescue!

GCM Network Manager is a really helpful class with a lot of customizable attributes. You can easily schedule both repeating and one-off tasks. For repeating tasks you can set the lowest as well as the highest repeat interval. This allows batching not only your requests, but also requests from other apps. The radio has to be woken up only once per period, and while it’s up, all apps in the queue download and upload what they are supposed to. This manager is also aware of the device’s network type and charging state, so you can adjust accordingly. You can find more details and samples in this article; I urge you to check it out. An example task looks like this:

Task task = new OneoffTask.Builder()
    .setService(CustomService.class)
    .setExecutionWindow(0, 30)
    .setTag(LogService.TAG_TASK_ONEOFF_LOG)
    .setUpdateCurrent(false)
    .setRequiredNetwork(Task.NETWORK_STATE_CONNECTED)
    .setRequiresCharging(false)
    .build();

By the way, since Android 3.0, if you do a network request on the main thread, you will get a NetworkOnMainThreadException. That will definitely warn you not to do that again.

5. Reflection

Reflection is the ability of classes and objects to examine their own constructors, fields, methods, and so on. It is usually used for backward compatibility, to check whether a given method is available on a particular OS version. If you have to use reflection for that purpose, make sure to cache the result, as reflection is pretty slow. Some widely used libraries use reflection too, like RoboGuice for dependency injection. That’s the reason why you should prefer Dagger 2. For more details about reflection, you can check a separate post.
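As a sketch of the “cache it” advice, you can look a Method up once and keep reusing it. The class below is hypothetical; View.setBackground() is just a convenient example of an API that appeared in a later OS version:

import android.graphics.drawable.Drawable;
import android.view.View;
import java.lang.reflect.Method;

public final class BackgroundCompat {

    // Look the method up once and cache it; the lookup is the slow part.
    private static final Method SET_BACKGROUND = findSetBackground();

    private static Method findSetBackground() {
        try {
            return View.class.getMethod("setBackground", Drawable.class);
        } catch (NoSuchMethodException e) {
            return null; // running on an older OS version
        }
    }

    public static void setBackground(View view, Drawable drawable) {
        if (SET_BACKGROUND != null) {
            try {
                SET_BACKGROUND.invoke(view, drawable);
                return;
            } catch (Exception ignored) {
                // fall through to the older call below
            }
        }
        view.setBackgroundDrawable(drawable); // deprecated, but fine as a fallback
    }

    private BackgroundCompat() {}
}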

6. Autoboxing

Autoboxing and unboxing are the processes of converting a primitive type to an Object type, or vice versa. In practice it means converting an int to an Integer. To achieve that, the compiler uses the Integer.valueOf() function internally. Converting is not just slow; Objects also take a lot more memory than their primitive equivalents. Let’s look at some code.

Integer total = 0;
for (int i = 0; i < 1000000; i++) {
    total += i;
}

While this takes 500ms on average, rewriting it to avoid autoboxing will speed it up drastically.

int total = 0;
for (int i = 0; i < 1000000; i++) {
    total += i;
}

This solution runs at around 2ms, which is 25 times faster. If you don’t believe me, test it out. The numbers will be obviously different per device, but it should still be a lot faster. And it’s also a really simple step to optimize.

Okay, you probably don’t create a variable of type Integer like this often. But what about the cases when it is more difficult to avoid? Like in a map, where you have to use Objects, like Map<Integer, Integer>? Look at the solution many people use.

Map<Integer, Integer> myMap = new HashMap<>();
for (int i = 0; i < 100000; i++) {
    myMap.put(i, random.nextInt());
}

Inserting 100k random ints in the map takes around 250ms to run. Now look at the solution with SparseIntArray.

SparseIntArray myArray = new SparseIntArray();
for (int i = 0; i < 100000; i++) {
    myArray.put(i, random.nextInt());
}

This takes a lot less, roughly 50ms. It’s also one of the easier methods for improving performance, as nothing complicated has to be done, and the code stays readable. While running a clean app with the first solution took 13 MB of my memory, using primitive ints took something under 7 MB, so just about half of it.

SparseIntArray is just one of the cool collections that can help you avoid autoboxing. A map like Map<Integer, Long> could be replaced by SparseLongArray, as the value of the map is of type Long. If you look at the source code of SparseLongArray, you will see something pretty interesting. Under the hood, it is basically just a pair of arrays. You can also use a SparseBooleanArray similarly.

If you read the source code, you might have noticed a note saying that SparseIntArray can be slower than HashMap. I’ve been experimenting a lot, but for me SparseIntArray was always better both memory- and performance-wise. I guess it’s still up to you which you choose; experiment with your use cases and see which fits best. Definitely keep the SparseArrays in mind when using maps.

7. OnDraw

As I’ve said above, when you are optimizing performance, you will probably see the most benefit in optimizing code which runs often. One of the functions running a lot is onDraw(). It may not surprise you that it’s responsible for drawing views on the screen. As the devices usually run at 60 fps, the function is run 60 times per second. Every frame has 16 ms to be fully handled, including its preparation and drawing, so you should really avoid slow functions. Only the main thread can draw on the screen, so you should avoid doing expensive operations on it. If you freeze the main thread for several seconds, you might get the infamous Application Not Responding (ANR) dialog. For resizing images, database work, etc., use a background thread.

If you think your users won’t notice that drop in frame rate, you are wrong!

I’ve seen some people trying to shorten their code, thinking that it will be more efficient that way. That definitely isn’t the way to go, as shorter code totally doesn’t mean faster code. Under no circumstances should you measure the quality of code by the number of lines.

One of the things you should avoid in onDraw() is allocating objects like Paint. Prepare everything in the constructor, so it’s ready when drawing. Even if you have onDraw() optimized, you should call it only as often as you have to. What is better than calling an optimized function? Well, not calling any function at all. In case you want to draw text, there is a pretty neat helper function called drawText(), where you can specify things like the text, coordinates, and the text color.
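For example, a minimal custom view (hypothetical, not taken from any real app) that prepares its Paint in the constructor and keeps onDraw() allocation-free might look like this:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.view.View;

public class LabelView extends View {

    // Created once, not on every frame
    private final Paint textPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private String label = "Hello";

    public LabelView(Context context, AttributeSet attrs) {
        super(context, attrs);
        textPaint.setColor(Color.BLACK);
        textPaint.setTextSize(48f);
    }

    public void setLabel(String label) {
        this.label = label;
        invalidate(); // ask for a redraw only when something actually changed
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // No allocations here – just draw what was prepared in the constructor
        canvas.drawText(label, 0, getHeight() / 2f, textPaint);
    }
}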

8. ViewHolders

You probably know this one, but I cannot skip it. The ViewHolder design pattern is a way of making scrolling lists smoother. It is a kind of view caching, which can seriously reduce the number of calls to findViewById() and view inflations by storing references to the views. It can look something like this.

static class ViewHolder {
    TextView title;
    TextView text;

    public ViewHolder(View view) {
        title = (TextView) view.findViewById(R.id.title);
        text = (TextView) view.findViewById(R.id.text);
    }
}

Then, inside the getView() function of your adapter, you can check if you have a usable view. If not, you create one.

ViewHolder viewHolder;
if (convertView == null) {
    convertView = inflater.inflate(R.layout.list_item, viewGroup, false);
    viewHolder = new ViewHolder(convertView);
    convertView.setTag(viewHolder);
} else {
    viewHolder = (ViewHolder) convertView.getTag();
}

viewHolder.title.setText("Hello World");

You can find a lot of useful info about this pattern around the internet. It can also be used in cases where your list view has multiple different types of elements in it, like some section headers.

9. Resizing Images

Chances are, your app will contain some images. In case you are downloading some JPGs from the web, they can have really huge resolutions. However, the devices they will be displayed on will be a lot smaller. Even if you take a photo with the camera of your device, it needs to be downsized before displaying, as the photo resolution is a lot bigger than the resolution of the display. Resizing images before displaying them is a crucial thing. If you tried displaying them at full resolution, you’d run out of memory pretty quickly. There is a whole lot written about displaying bitmaps efficiently in the Android docs; I will try to sum it up.

So you have a bitmap, but you don’t know anything about it. There’s a useful BitmapFactory.Options flag called inJustDecodeBounds at your service, which allows you to find out the bitmap’s resolution without allocating memory for its pixels. Let’s assume that your bitmap is 1024×768, and the ImageView used for displaying it is just 400×300. You should keep dividing the bitmap’s resolution by 2 while it’s still bigger than the given ImageView, and pass the result as inSampleSize. A sample size of 2 downsamples the bitmap by a factor of 2, giving you a 512×384 bitmap. The downsampled bitmap uses 4x less memory, which will help you a lot in avoiding the famous OutOfMemory error.
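The approach from the Android docs boils down to something like the sketch below; the method name and the target size parameters are placeholders:

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class BitmapUtils {

    public static Bitmap decodeSampledBitmap(Resources res, int resId,
                                             int reqWidth, int reqHeight) {
        // First pass: read only the dimensions, no pixel memory is allocated
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;
        BitmapFactory.decodeResource(res, resId, options);

        // Keep halving while the next halving still stays above the target size
        int inSampleSize = 1;
        while (options.outWidth / (inSampleSize * 2) >= reqWidth
                && options.outHeight / (inSampleSize * 2) >= reqHeight) {
            inSampleSize *= 2;
        }

        // Second pass: decode the downsampled bitmap for real
        options.inSampleSize = inSampleSize;
        options.inJustDecodeBounds = false;
        return BitmapFactory.decodeResource(res, resId, options);
    }
}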

Now that you know how to do it, you should not do it. … At least, not if your app relies on images heavily, and it’s not just 1-2 images. Definitely avoid stuff like resizing and recycling images manually, use some third party libraries for that. The most popular ones are Picasso by Square, Universal Image Loader, Fresco by Facebook, or my favourite, Glide. There is a huge active community of developers around it, so you can find a lot of helpful people at the issues section on GitHub as well.

10. Strict Mode

Strict Mode is a quite useful developer tool that many people don’t know about. It’s usually used for detecting network requests or disk accesses from the main thread. You can set what issues Strict Mode should look for and what penalty it should trigger. A Google sample looks like this:

public void onCreate() {
    if (DEVELOPER_MODE) {
        StrictMode.setThreadPolicy(new StrictMode.ThreadPolicy.Builder()
                .detectDiskReads()
                .detectDiskWrites()
                .detectNetwork()
                .penaltyLog()
                .build());
        StrictMode.setVmPolicy(new StrictMode.VmPolicy.Builder()
                .detectLeakedSqlLiteObjects()
                .detectLeakedClosableObjects()
                .penaltyLog()
                .penaltyDeath()
                .build());
    }
    super.onCreate();
}

If you want to detect every issue Strict Mode can find, you can also use detectAll(). As with many performance tips, you should not blindly try fixing everything Strict Mode reports. Just investigate it, and if you are sure it’s not an issue, leave it alone. Also make sure to use Strict Mode only for debugging, and always have it disabled on production builds.

Debugging Performance: The Pro Way

Let’s now see some tools that can help you find bottlenecks, or at least show that something is wrong.

1. Android Monitor

This is a tool built into Android Studio. By default, you can find the Android Monitor in the bottom left corner, where you can switch between two tabs: Logcat and Monitors. The Monitors section contains four different graphs: Network, CPU, GPU, and Memory. They are pretty self-explanatory, so I will just quickly go through them. Here is a screenshot of the graphs taken while parsing some JSON as it is downloaded.

Android Monitor

The Network part shows the incoming and outgoing traffic in KB/s. The CPU part displays the CPU usage in percent. The GPU monitor displays how much time it takes to render the frames of a UI window. This is the most detailed monitor out of these 4, so if you want more details about it, read this.

Lastly, we have the Memory monitor, which you will probably use the most. By default it shows the current amount of free and allocated memory. You can force a garbage collection with it too, to test whether the amount of used memory drops. It has a useful feature called Dump Java Heap, which creates an HPROF file that can be opened with the HPROF Viewer and Analyzer. That will enable you to see how many objects you have allocated, how much memory is taken by what, and maybe which objects are causing memory leaks. Learning how to use this analyzer is not the simplest task out there, but it is worth it. The next thing you can do with the Memory monitor is timed allocation tracking, which you can start and stop as you wish. It can be useful in many cases, for example when scrolling or rotating the device.

2. GPU Overdraw

This is a simple helper tool, which you can activate in Developer Options once you have enabled developer mode. Select Debug GPU overdraw and “Show overdraw areas”, and your screen will get some weird colors. That’s OK; it’s what is supposed to happen. The colors show how many times a particular area was overdrawn. True color means there was no overdraw; this is what you should aim for. Blue means one overdraw, green means two, pink three, and red four.

GPU Overdraw

While seeing true color is best, you will always have some overdraw, especially around texts, navigation drawers, dialogs, and the like, so don’t try to get rid of it completely. If your app is blueish or greenish, that’s probably fine. However, if you see too much red on some simple screens, you should investigate what’s going on. It might be too many fragments stacked onto each other, if you keep adding them instead of replacing them. As I’ve mentioned above, drawing is the slowest part of an app, so there is no sense in drawing something that will have more than three layers drawn on top of it. Feel free to check your favourite apps with this tool. You will see that even apps with over a billion downloads have red areas, so just take it easy when you are trying to optimize.

3. GPU Rendering

This is another tool from the Developer options, called Profile GPU rendering. Upon selecting it, pick “On screen as bars”, and you will notice some colored bars appearing on your screen. Every window gets its own set of bars; somewhat unexpectedly, the status bar has its own, and if you have software navigation buttons, they have their own bars too. In any case, the bars get updated as you interact with the screen.

GPU Rendering

The bars consist of 3-4 colors, and according to the Android docs, their size indeed matters. The smaller, the better. At the bottom you have blue, which represents the time used to create and update the View’s display lists. If this part is too tall, it means that there is a lot of custom view drawing, or a lot of work done in the onDraw() functions. If you have Android 4.0+, you will see a purple bar above the blue one. This represents the time spent transferring resources to the render thread. Then comes the red part, which represents the time spent by Android’s 2D renderer issuing commands to OpenGL to draw and redraw display lists. At the top is the orange bar, which represents the time the CPU is waiting for the GPU to finish its work. If it’s too tall, the app is doing too much work on the GPU.

If your rendering is fast enough, you will see one more color above the orange: a green line representing the 16 ms threshold. Since your goal should be to run your app at 60 fps, you have 16 ms to draw each frame. If you don’t make it, some frames may be skipped, the app can become jerky, and the user will definitely notice. Pay special attention to animations and scrolling; that’s where smoothness matters the most. Even though you can detect some skipped frames with this tool, it won’t really help you figure out where exactly the problem is.

4. Hierarchy Viewer

This is one of my favourite tools out there, as it’s really powerful. You can start it from Android Studio through Tools -> Android -> Android Device Monitor, or from the sdk/tools folder as “monitor”. You will also find a standalone hierarchyviewer executable there, but as it’s deprecated, you should open the monitor instead. However you open the Android Device Monitor, switch to the Hierarchy Viewer perspective. If you don’t see any running apps assigned to your device, there are a couple of things you can do to fix it. Also try checking out this issue thread; there are people with all kinds of issues and all kinds of solutions, so something should work for you too.

With Hierarchy Viewer, you can get a really neat overview of your view hierarchies (obviously). If every layout sat in a separate XML file, you could easily spot useless views, but once you start combining layouts, it quickly gets confusing. A tool like this makes it simple to spot, for example, a RelativeLayout whose only child is another RelativeLayout, which makes one of them removable.

Avoid calling requestLayout(), as it causes a traversal of the entire view hierarchy to find out how big each view should be. If there is some conflict with the measurements, the hierarchy might be traversed multiple times, which, if it happens during an animation, will definitely cause some frames to be skipped. If you want to find out more about how Android draws its views, you can read this. Let’s look at one view as seen in Hierarchy Viewer.

The top right corner contains a button for maximizing the preview of the particular view in a standalone window. Under it you can also see the actual preview of the view in the app. The next item is a number, which represents how many children the given view has, including the view itself. If you select a node (preferably the root one) and press “Obtain layout times” (the three colored circles), three more values will be filled in, together with colored circles labelled measure, layout, and draw. It is probably no surprise that the measure value is the time it took to measure the given view; the layout value is the time spent laying it out, while draw is the time of the actual drawing operation. These values and colors are relative to each other: green means the view is in the faster 50% of all views in the tree, yellow means it is in the slower 50%, and red means the given view is one of the slowest. As the values are relative, there will always be red ones; you simply cannot avoid them.

Under the values you have the class name, such as “TextView”, an internal view ID of the object, and the android:id of the view, which you set in the XML files. I urge you to build a habit of adding IDs to all views, even if you don’t reference them in the code. It will make identifying the views in Hierarchy Viewer really simple, and in case you have automated tests in your project, it will also make targeting the elements a lot faster. That will save some time for you and your colleagues writing them. Adding IDs to elements added in XML files is pretty straightforward. But what about the dynamically added elements? Well, it turns out to be really simple too. Just create an ids.xml file inside your values folder and type in the required fields. It can look like this:

<resources>
    <item name="item_title" type="id"/>
    <item name="item_body" type="id"/>
</resources>

Then in the code, you can use setId(R.id.item_title). It couldn’t be simpler.
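
For instance, a small sketch of tagging a dynamically created view could look like this (the addTitleView helper, the TextView, and the parent ViewGroup here are just illustrative placeholders):

// Adds a dynamically created TextView and tags it with an ID from ids.xml,
// so it gets a meaningful name in Hierarchy Viewer and is easy to target in UI tests.
private void addTitleView(ViewGroup parentLayout) {
    TextView titleView = new TextView(parentLayout.getContext());
    titleView.setId(R.id.item_title);
    titleView.setText("Some title");
    parentLayout.addView(titleView);
}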

There are a couple more things to pay attention to when optimizing UI. You should generally avoid deep hierarchies and prefer shallow, maybe wide ones. Do not use layouts you don’t need. For example, you can probably replace a group of nested LinearLayouts with either a RelativeLayout or a TableLayout. Feel free to experiment with different layouts; don’t just always use LinearLayout and RelativeLayout. Also try creating custom views when needed; it can improve performance significantly if done well. For instance, did you know that Instagram doesn’t use TextViews for displaying comments?

You can find some more info about Hierarchy Viewer on the Android Developers site with descriptions of different panes, using the Pixel Perfect tool, etc. One more thing I would point out is capturing the views in a .psd file, which can be done by the “Capture the window layers” button. Every view will be in a separate layer, so it’s really simple to hide or change it in Photoshop or GIMP. Oh, that’s another reason to add an ID to every view you can. It will make the layers have names that actually make sense.

You will find a lot more debugging tools in Developer options, so I advise you to activate them and see what they do. What could possibly go wrong?

The Android developers site contains a set of best practices for performance. They cover a lot of different areas, including memory management, which I haven’t really talked about. I silently ignored it, because handling memory and tracking memory leaks is a whole separate story. Using a third-party library for efficiently displaying images will help a lot, but if you still have memory issues, check out LeakCanary, made by Square, or read this.
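
To give a rough idea, a minimal setup sketch for the LeakCanary 1.x releases current at the time (assuming you have added the leakcanary-android dependency, plus its no-op counterpart for release builds, as described in its README) might look like this:

public class ExampleApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Installs LeakCanary, which watches destroyed activities and dumps
        // the heap when it suspects a leak. With the no-op artifact in
        // release builds, this call does nothing there.
        LeakCanary.install(this);
    }
}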

Wrapping Up

So, this was the good news. The bad news is that optimizing Android apps is a lot more complicated. There are a lot of ways of doing everything, so you should be familiar with their pros and cons. There usually isn’t a silver bullet solution that has only benefits. Only by understanding what’s happening behind the scenes will you be able to pick the solution that is best for you. Just because your favorite developer says something is good, it doesn’t necessarily mean it’s the best solution for you. There are a lot more areas to discuss and more advanced profiling tools, so we might get to them next time.

Make sure you learn from the top developers and top companies. You can find a couple hundred engineering blogs at this link. It’s obviously not just Android-related stuff, so if you are interested only in Android, you will have to filter for the relevant blogs. I would highly recommend the blogs of Facebook and Instagram. Even though the Instagram UI on Android is questionable, their engineering blog has some really cool articles. For me it’s awesome that it’s so easy to see how things are done at companies which handle hundreds of millions of users daily, so not reading their blogs seems crazy. The world is changing really fast, and if you aren’t constantly trying to improve, learning from others, and using new tools, you will be left behind. As Mark Twain said, a person who doesn’t read has no advantage over one who can’t read.

This article was written by Tibor Kaputa, a Toptal Java developer.

Scaling Scala: How to Dockerize Using Kubernetes


Kubernetes is the new kid on the block, promising to help deploy applications into the cloud and scale them more quickly. Today, when developing for a microservices architecture, it’s pretty standard to choose Scala for creating API servers.

Microservices are replacing classic monolithic back-end servers with multiple independent services that communicate among themselves and have their own processes and resources.

If there is a Scala application in your plans and you want to scale it into a cloud, then you are in the right place. In this article, I am going to show step by step how to take a generic Scala application and implement Kubernetes with Docker to launch multiple instances of the application. The final result will be a single application deployed as multiple instances, load balanced by Kubernetes.

All of this will be implemented by simply importing the Kubernetes source kit in your Scala application. Please note, the kit hides a lot of complicated details related to installation and configuration, but it is small enough to be readable and easy to understand if you want to analyze what it does. For simplicity, we will deploy everything on your local machine. However, the same configuration is suitable for a real-world cloud deployment of Kubernetes.

Scale Your Scala Application with Kubernetes

Be smart and sleep tight, scale your Docker with Kubernetes.

What is Kubernetes?

Before going into the gory details of the implementation, let’s discuss what Kubernetes is and why it’s important.

You may have already heard of Docker. In a sense, it is a lightweight virtual machine.

Docker gives the advantage of deploying each server in an isolated environment, very similar to a stand-alone virtual machine, without the complexity of managing a full-fledged virtual machine.

For these reasons, it is already one of the more widely used tools for deploying applications in the cloud. A Docker image is easy and fast to build and duplicate, much more so than a traditional virtual machine image for VMware, VirtualBox, or Xen.

Kubernetes complements Docker, offering a complete environment for managing dockerized applications. By using Kubernetes, you can easily deploy, configure, orchestrate, manage, and monitor hundreds or even thousands of Docker applications.

Kubernetes is an open source tool developed by Google and adopted by many other vendors. It is available natively on the Google Cloud Platform, but other vendors have adopted it for their own cloud services too. It can be found on Amazon AWS, Microsoft Azure, Red Hat OpenShift, and other cloud platforms. We can say it is well positioned to become a standard for deploying cloud applications.

Prerequisites

Now that we covered the basics, let’s check if you have all the prerequisite software installed. First of all, you need Docker. If you are using either Windows or Mac, you need the Docker Toolbox. If you are using Linux, you need to install the particular package provided by your distribution or simply follow the official directions.

We are going to code in Scala, which is a JVM language. You need, of course, the Java Development Kit and the Scala build tool SBT installed and available on the global path. If you are already a Scala programmer, chances are you have those tools installed already.

If you are using Windows or Mac, Docker will by default create a virtual machine named default with only 1GB of memory, which can be too small for running Kubernetes. In my experience, I had issues with the default settings. I recommend that you open the VirtualBox GUI, select your virtual machine default, and change the memory to at least 2048MB.

VirtualBox memory settings

The Application to Clusterize

The instructions in this tutorial can apply to any Scala application or project. For this article to have some “meat” to work on, I chose an example used very often to demonstrate a simple REST microservice in Scala, called Akka HTTP. I recommend you try to apply the source kit to the suggested example before attempting to use it on your own application. I have tested the kit against the demo application, but I cannot guarantee that there will be no conflicts with your code.

So first, we start by cloning the demo application:

git clone https://github.com/theiterators/akka-http-microservice

Next, test if everything works correctly:

cd akka-http-microservice
sbt run

Then, access http://localhost:9000/ip/8.8.8.8, and you should see something like the following image:

Akka HTTP microservice is running

Adding the Source Kit

Now, we can add the source kit with some Git magic:

git remote add ScalaGoodies https://github.com/sciabarra/ScalaGoodies
git fetch --all
git merge ScalaGoodies/kubernetes

With that, you have the demo including the source kit, and you are ready to try. Or you can even copy and paste the code from there into your application.

Once you have merged or copied the files in your projects, you are ready to start.

Starting Kubernetes

Once you have downloaded the kit, you need to download the necessary kubectl binary by running:

bin/install.sh

This installer is smart enough (hopefully) to download the correct kubectl binary for OSX, Linux, or Windows, depending on your system. Note, the installer worked on the systems I own. Please do report any issues, so that I can fix the kit.

Once you have installed the kubectl binary, you can start the whole Kubernetes in your local Docker. Just run:

bin/start-kube-local.sh

The first time it runs, this command will download the images of the whole Kubernetes stack, and a local registry needed to store your images. It can take some time, so please be patient. Also note that it needs direct access to the internet. If you are behind a proxy, that will be a problem, as the kit does not support proxies. To solve it, you would have to configure tools like Docker, curl, and so on to use the proxy. It is complicated enough that I recommend getting temporary unrestricted access.

Assuming you were able to download everything successfully, to check if Kubernetes is running fine, you can type the following command:

bin/kubectl get nodes

The expected answer is:

NAME        STATUS    AGE
127.0.0.1   Ready     2m

Note that age may vary, of course. Also, since starting Kubernetes can take some time, you may have to invoke the command a couple of times before you see the answer. If you do not get errors here, congratulations, you have Kubernetes up and running on your local machine.

Dockerizing Your Scala App

Now that you have Kubernetes up and running, you can deploy your application in it. In the old days, before Docker, you had to deploy an entire server for running your application. With Kubernetes, all you need to do to deploy your application is:

  • Create a Docker image.
  • Push it to a registry from where it can be launched.
  • Launch the instance with Kubernetes, which will take the image from the registry.

Luckily, it is way less complicated than it looks, especially if you are using the SBT build tool like many do.

In the kit, I included two files containing all the necessary definitions to create an image able to run Scala applications, or at least what is needed to run the Akka HTTP demo. I cannot guarantee that it will work with every other Scala application, but it is a good starting point and should work for many different configurations. The files to look at for building the Docker image are:

docker.sbt
project/docker.sbt

Let’s have a look at what’s in them. The file project/docker.sbt contains the command to import the sbt-docker plugin:

addSbtPlugin("se.marcuslonnberg" % "sbt-docker" % "1.4.0")

This plugin manages the building of the Docker image with SBT for you. The Docker definition is in the docker.sbt file and looks like this:

imageNames in docker := Seq(ImageName("localhost:5000/akkahttp:latest"))

dockerfile in docker := {
  val jarFile: File = sbt.Keys.`package`.in(Compile, packageBin).value
  val classpath = (managedClasspath in Compile).value
  val mainclass = mainClass.in(Compile, packageBin).value.getOrElse(sys.error("Expected exactly one main class"))
  val jarTarget = s"/app/${jarFile.getName}"
  val classpathString = classpath.files.map("/app/" + _.getName)
    .mkString(":") + ":" + jarTarget
  new Dockerfile {
    from("anapsix/alpine-java:8")
    add(classpath.files, "/app/")
    add(jarFile, jarTarget)
    entryPoint("java", "-cp", classpathString, mainclass)
  }
}

To fully understand the meaning of this file, you need to know Docker well enough to read a Dockerfile definition. However, we are not going into its details, because you do not need to understand it thoroughly to build the image.

The beauty of using SBT for building the Docker image is that SBT takes care of collecting all the files for you.

Note the classpath is automatically generated by the following command:

val classpath = (managedClasspath in Compile).value

In general, it is pretty complicated to gather all the JAR files to run an application. Using SBT, the Docker file will be generated with add(classpath.files, "/app/"). This way, SBT collects all the JAR files for you and constructs a Dockerfile to run your application.

The other commands gather the missing pieces to create the Docker image. The image will be built on top of an existing image suited to running Java programs (anapsix/alpine-java:8, available on Docker Hub). The other instructions add the remaining files needed to run your application. Finally, by specifying an entry point, we can run it. Note also that the image name starts with localhost:5000 on purpose, because localhost:5000 is where I installed the registry in the start-kube-local.sh script.

Building the Docker Image with SBT

To build the Docker image, you can ignore all the details of the Dockerfile. You just need to type:

sbt dockerBuildAndPush

The sbt-docker plugin will then build the Docker image for you, downloading all the necessary pieces from the internet, and push it to the Docker registry that was started earlier, together with the Kubernetes application, on localhost. So, all you need to do is wait a little bit longer to have your image cooked and ready.

Note, if you experience problems, the best thing to do is to reset everything to a known state by running the following commands:

bin/stop-kube-local.sh
bin/start-kube-local.sh

Those commands should stop all the containers and restart them correctly to get your registry ready to receive the image built and pushed by sbt.

Starting the Service in Kubernetes

Now that the application is packaged in a container and pushed to a registry, we are ready to use it. Kubernetes uses the command line and configuration files to manage the cluster. Since command lines can become very long, and to make the steps easy to replicate, I am using configuration files here. All the samples in the source kit are in the folder kube.

Our next step is to launch a single instance of the image. In Kubernetes terminology, a running instance of an image is called a pod. So let’s create a pod by invoking the following command:

bin/kubectl create -f kube/akkahttp-pod.yml

You can now inspect the situation with the command:

bin/kubectl get pods

You should see:

NAME                   READY     STATUS    RESTARTS   AGE
akkahttp               1/1       Running   0          33s
k8s-etcd-127.0.0.1     1/1       Running   0          7d
k8s-master-127.0.0.1   4/4       Running   0          7d
k8s-proxy-127.0.0.1    1/1       Running   0          7d

The status can actually be different; for example, it can show “ContainerCreating” for a few seconds before it becomes “Running”. You can also get a status like “Error” if, for example, you forgot to create the image beforehand.

You can also check if your pod is running with the command:

bin/kubectl logs akkahttp

You should see an output ending with something like this:

[DEBUG] [05/30/2016 12:19:53.133] [default-akka.actor.default-dispatcher-5] [akka://default/system/IO-TCP/selectors/$a/0] Successfully bound to /0:0:0:0:0:0:0:0:9000

Now you have the service up and running inside the container. However, the service is not yet reachable. This behavior is part of the design of Kubernetes. Your pod is running, but you have to expose it explicitly. Otherwise, the service is meant to be internal.

Creating a Service

Creating a service and checking the result is a matter of executing:

bin/kubectl create -f kube/akkahttp-service.yaml
bin/kubectl get svc

You should see something like this:

NAME               CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
akkahttp-service   10.0.0.54                  9000/TCP   44s
kubernetes         10.0.0.1     <none>        443/TCP    3m

Note that the cluster IP may be different on your machine; Kubernetes allocated it when it created and started the service. If you are using Linux, you can directly open the browser and type http://10.0.0.54:9000/ip/8.8.8.8 to see the result. If you are using Windows or Mac with Docker Toolbox, the IP is local to the virtual machine that is running Docker, and unfortunately it is still unreachable.

I want to stress here that this is not a problem of Kubernetes, rather it is a limitation of the Docker Toolbox, which in turn depends on the constraints imposed by virtual machines like VirtualBox, which act like a computer within another computer. To overcome this limitation, we need to create a tunnel. To make things easier, I included another script which opens a tunnel on an arbitrary port to reach any service we deployed. You can type the following command:

bin/forward-kube-local.sh akkahttp-service 9000

Note that the tunnel does not run in the background; you have to keep the terminal window open as long as you need it and close it when you no longer need the tunnel. While the tunnel is running, you can open http://localhost:9000/ip/8.8.8.8 and finally see the application running in Kubernetes.

Final Touch: Scale

So far we have “simply” put our application in Kubernetes. While it is an exciting achievement, it does not add too much value to our deployment. We’re saved from the effort of uploading and installing on a server and configuring a proxy server for it.

Where Kubernetes shines is in scaling. You can deploy two, ten, or one hundred instances of our application by only changing the number of replicas in the configuration file. So let’s do it.

We are going to stop the single pod and start a deployment instead. So let’s execute the following commands:

bin/kubectl delete -f kube/akkahttp-pod.yml
bin/kubectl create -f kube/akkahttp-deploy.yaml

Next, check the status again with bin/kubectl get pods. You may have to try a couple of times, because the deployment can take some time to complete:

NAME                                   READY     STATUS    RESTARTS   AGE
akkahttp-deployment-4229989632-mjp6u   1/1       Running   0          16s
akkahttp-deployment-4229989632-s822x   1/1       Running   0          16s
k8s-etcd-127.0.0.1                     1/1       Running   0          6d
k8s-master-127.0.0.1                   4/4       Running   0          6d
k8s-proxy-127.0.0.1                    1/1       Running   0          6d

Now we have two pods, not one. This is because the configuration file I provided contains the value replicas: 2, and the two pod names are generated by the system. I am not going into the details of the configuration files, because the scope of this article is simply to give Scala programmers a jump-start into Kubernetes.

Anyhow, there are now two pods active. What is interesting is that the service is the same as before. We configured the service to load balance between all the pods labeled akkahttp. This means we do not have to redeploy the service, but we can replace the single instance with a replicated one.
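
To make the label-based wiring a bit more concrete, here is a rough sketch of what the relevant parts of such a deployment and service could look like. The exact files shipped in the kit may differ; the label key, the names, the image, and the port below are assumptions based only on what we have seen so far:

# Deployment: run two replicas of the image pushed to the local registry.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: akkahttp-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: akkahttp
    spec:
      containers:
      - name: akkahttp
        image: localhost:5000/akkahttp:latest
        ports:
        - containerPort: 9000
---
# Service: load balance across every pod carrying the same label.
apiVersion: v1
kind: Service
metadata:
  name: akkahttp-service
spec:
  selector:
    app: akkahttp
  ports:
  - port: 9000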

We can verify this by launching the proxy again (if you are on Windows and you have closed it):

bin/forward-kube-local.sh akkahttp-service 9000

Then, we can try to open two terminal windows and see the logs for each pod. For example, in the first type:

bin/kubectl logs -f akkahttp-deployment-4229989632-mjp6u

And in the second type:

bin/kubectl logs -f akkahttp-deployment-4229989632-s822x

Of course, edit the command line accordingly with the values you have in your system.

Now, try to access the service with two different browsers. You should see the requests being split between the multiple available servers, as in the following image:

Kubernets in action

Conclusion

Today we barely scratched the surface. Kubernetes offers a lot more possibilities, including automated scaling and restart, incremental deployments, and volumes. Furthermore, the application we used as an example is very simple and stateless, with the various instances not needing to know about each other. In the real world, distributed applications do need to know about each other and need to change configuration according to the availability of other servers. Indeed, Kubernetes offers a distributed key-value store (etcd) to allow different applications to communicate with each other when new instances are deployed. However, this example is purposefully small and simplified to help you get going, focusing on the core functionality. If you follow the tutorial, you should be able to get a working environment for your Scala application on your machine without being confused by a large number of details and getting lost in complexity.

This article was written by Michele Sciabarra, a Toptal Scala developer.

Produce DWGs Like It’s 2016: Teigha For Architecture


In this article, I will introduce you to Teigha, a library that provides an alternative way of handling DWG files and ACA objects. We’ll write a small piece of code that creates a house out of ACA objects.

If you want to handle DWG files and AutoCAD objects programmatically, the only platform options are ObjectARX and Teigha. All third-party components and applications that can read and write DWG files use Teigha as a base.

Produce DWGs Like It's 2016: Teigha For Architecture

Accelerate Your DWG Production With Teigha Architecture

Teigha Architecture is a set of libraries that enable you to read, write, and handle objects of the original AutoCAD and its derivatives like ACA. The libraries also provide many auxiliary mechanisms to help you handle AutoCAD objects and rendering devices to render the DWG database.

Teigha’s main features are:

  • Support DWG, DXF, BDXF, and DGN file formats.
  • Render drawing files using GDI, OpenGL, or DirectX, with the ability to select entities.
  • Edit and manipulate CAD data programmatically, including:
      • Explode an entity into a set of simpler entities.
      • Apply a transformation to an entity.
      • Modify arbitrary properties of database objects.
      • Clone a database object.
  • Export to SVG, PDF, DWF, BMP, STL, and DAE (Collada).
  • Import DWF/DAE/DGN files into a .dwg database.
  • Support custom objects: members can create custom objects that are usable within any Teigha host application (compatible with .dwg files only).
  • Support ACIS/Parasolid data internally, including rendering (wireframe and shaded) for embedded 3D solids and access to the underlying boundary representation data.
  • Implement custom commands.

Why Should Designers Consider Teigha?

AutoCAD saves its data in the DWG file format. DWG is a proprietary binary file format used for storing two- and three-dimensional design data and metadata, and it is an industry standard. Thousands and thousands of DWG drawings exist and need to be supported. Besides AutoCAD, there is only one library that can load, save, and manipulate objects stored in a DWG file: Teigha.

There are several reasons to use Teigha rather than AutoCAD ObjectARX:

  • Cost: If you develop an application that works with information stored in .dwg files, you have two options: develop an AutoCAD plugin, or develop your application based on Teigha. Developing an AutoCAD plugin means that all of your customers have to own an AutoCAD license, which costs a fortune. Teigha pricing is very affordable.
  • Flexibility: Based on Teigha, you can develop your own CAD application from scratch, designed to fulfill your customers’ specific needs. You are not bound to the AutoCAD core and GUI; you can develop your own GUI for your CAD application. Or, if you need a CAD-like GUI, you can use one of the inexpensive Teigha-based CAD applications available on the market (like BricsCAD or ZWCAD) as a host for your plugin.
  • Many supported platforms: For example, you can build a standalone CAD application for mobile platforms like iOS and Android, which is in high demand in certain areas.
  • Source code access: Teigha sources are available to founding members. If you need specific functionality that is not provided, or want to change existing functionality for your needs, you can buy the Teigha sources and change anything you need.

Autodesk offers RealDWG, a software library that allows C++ and .NET developers to read and write AutoCAD DWG and DXF files. But again, it is much more expensive, has annual fees, and provides only load/save functionality for DWG files, while Teigha provides rendering devices and other useful APIs for building your own CAD application.

Produce DWGs Like It's 2016: Teigha For Architecture

An Alternative to AutoCAD and ObjectARX

For starters, let’s load and render a DWG drawing. We can use the standard sample from Teigha’s distribution package:

Produce DWGs Like It's 2016: Teigha For Architecture

We’ve successfully loaded and rendered the DWG file. (By the way, the picture that you could see in the beginning of this article is the result of rendering a different file.) The standard sample is a windowed C++ application. It allows you to load, view, and edit DWG files without using AutoCAD. I should also mention that the API of AutoCAD objects is almost identical to the API of Teigha objects, so you can easily modify an existing ObjectARX plugin to support Teigha-based applications.

Most alternatives to AutoCAD, such as BricsCAD, ZWCad, and IntelliCAD, use Teigha to handle the DWG format and ACAD objects.

The Peculiarities of AutoCAD Architecture Objects

Now I will introduce you to architectural objects and how to handle them. AutoCAD Architecture handles special high-level objects intended for architectural design: walls, doors, windows, roofs, etc. The objects are viewport dependent, that is, they can be rendered differently depending on the camera direction. Objects are style based. A certain style is assigned to each object. If you change the style, all objects with that style will change. Every object consists of components, and every component has its own visual settings: color, line type, material, and scale. For example, a 3D door may consist of a frame, a door panel, and a sheet of glass. For different view modes, the object is rendered using different geometries, so the number and settings of components are different for different presentations.

Teigha for Architecture (TA)

To handle architectural objects, you can use ACA and its open API that enables you to create plugins. Alternatively, you can use Teigha for Architecture, a library developed by the Open Design Alliance.

Teigha Architecture is a C++ class library that implements all basic primitives of ACA, such as walls, windows, doors, roofs, beams, openings, etc. The library allows you to load these objects from any version of the DWG format and write (convert) to the latest DWG version. TA can render any primitives in different views and configurations. ACA objects interact with each other, so TA also supports ACA’s auxiliary classes and mechanisms, such as anchor, display manager, property sets, relation graph, etc.

Starting with Teigha Architecture API

I’ve already described Teigha’s main features. Let’s take a closer look at Teigha and write our first command, a very simple one. I’ll be using Visual Studio 2005, which is obviously outdated, but the Teigha libraries are multi-platform, and the distribution package includes a solution generator for all Visual Studio versions up to 2015. Depending on your license type, you will have access to the complete code of the whole library, or only to the pre-built binary files and header files.

The set of TA libraries looks like this:

Produce DWGs Like It's 2016: Teigha For Architecture

Basically, these are regular Windows DLL files (you can also build them for other platforms: iOS, Linux, UNIX, etc.). You can find LIB files for them in a separate folder. In addition to TA, we will need the Teigha Core libraries, because TA is an extension on top of the Core objects. Core implements the main mechanisms and objects of the original AutoCAD.

Initializing Teigha Architecture

To initialize the library, we need a class that performs platform-specific operations on files.

class MyServices : public ExSystemServices, public ExHostAppServices
{
protected:
  ODRX_USING_HEAP_OPERATORS(ExSystemServices);
};

The distribution package includes two ready-made extensions for Windows, ExSystemServices and ExHostAppServices, which we can use in this case. Then we need to initialize the library and the graphics subsystem:

OdStaticRxObject<MyServices> svcs;
odInitialize( &svcs );
odgsInitialize();

OdStaticRxObject adds the addRef/Release logic to an object. The library saves the reference to the MyServices object and uses that object for platform-specific operations.

Let’s initialize the TA libraries:

// Loading of all public Teigha Architecture DRX modules.
// Note that not all calls are necessary, since some of the modules depend on others,
// but here we list all of them.
//
// If a program that uses TD doesn't modify or create binary files,
// it may not need to load any DRX modules on start, because they will be loaded automatically.
// But if a program modifies or creates binary files, then it is highly recommended
// to load all DRX modules the program uses.
::odrxDynamicLinker()->loadApp( OD_T("AecBase") );
::odrxDynamicLinker()->loadApp( OD_T("AecArchBase") );
::odrxDynamicLinker()->loadApp( OD_T("AecArchDACHBase") );
::odrxDynamicLinker()->loadApp( OD_T("AecScheduleData") );
::odrxDynamicLinker()->loadApp( OD_T("AecSchedule") );
::odrxDynamicLinker()->loadApp( OD_T("AecStructureBase") );

AecBase, AecArchBase, etc. are the TX modules (that is, DLL libraries) shown in the screenshot above. They have already been linked using LIB files, but that is not enough. We also need to initialize them as modules. What does it mean? At runtime, the memory contains a dictionary of loaded classes. The dictionary allows you to cast references between different types of TA objects and create instances of TA classes by using a centralized pseudo-constructor mechanism.

For example, when the command ::odrxDynamicLinker()->loadApp( OD_T("AecArchBase") ) is being executed, the function AECArchBase::initApp() will be called in the framework. Basically, initApp() will register all classes of the library in the global dictionary by calling the static function rxInit() for each of them:

…
AECDbSpaceBoundary::rxInit();
AECDbStair::rxInit();
AECDbWall::rxInit();
AECDbZone::rxInit();
…

After that, the object creation mechanism will be available, and we will be able to create, for example, a wall by calling AECDbWallPtr pWall = AECDbWall::CreateAECObject(). Without this initialization, an exception will be thrown if we try to create an object of a TA class.

Let’s create an empty DWG database by calling


OdDbDatabasePtr pDatabase = svcs.createDatabase();

This database is a central object. It is an object database that is saved to and loaded from a DWG file. We are going to add all architectural objects that we create to that database. When we are finished, we will save the database to a DWG file by calling:


OdWrFileBuf cBuffer( strFilename );
pDatabase->writeFile( &cBuffer, OdDb::kDwg, OdDb::kDHL_CURRENT );

Now let’s initialize some more loaded libraries and the display manager:


AECArchDACHBaseDatabase( pDatabase ).Init();
AECScheduleDatabase( pDatabase ).Init();
AECStructureBaseDatabase( pDatabase ).Init();
  
init_display_system( pDatabase );

An AEC dictionary is created in the database. That dictionary contains the default measurement units for length, area, volume, and angle, as well as print settings. Display representations implemented in the modules are registered.

The initialization is complete. If you have skipped some steps, the result will depend on which step you skipped: Objects will not be created or will not be rendered (you will see an empty screen), or there may be other glitches.

So far, the complete code looks as follows:


class MyServices : public ExSystemServices, public ExHostAppServices
{
protected:
  ODRX_USING_HEAP_OPERATORS(ExSystemServices);
};

int wmain(int argc, wchar_t* argv[])
{ 
  // Initialize TD with system services.
  // And create single instance of hostapp services
  // for TD database creation.
  OdStaticRxObject<MyServices> svcs;
  odInitialize( &svcs );
  odgsInitialize();


  // Loading of all public Teigha Architecture DRX modules.
  // Note that not all calls are necessary, since some of the modules depend on others,
  // but here we list all of them.
  //
  // If a program that uses TD doesn't modify or create binary files,
  // it may not need to load any DRX modules on start, because they will be loaded automatically.
  // But if a program modifies or creates binary files, then it is highly recommended
  // to load all DRX modules the program uses.
  ::odrxDynamicLinker()->loadApp( OD_T("AecBase") );
  ::odrxDynamicLinker()->loadApp( OD_T("AecArchBase") );
  ::odrxDynamicLinker()->loadApp( OD_T("AecArchDACHBase") );
  ::odrxDynamicLinker()->loadApp( OD_T("AecScheduleData") );
  ::odrxDynamicLinker()->loadApp( OD_T("AecSchedule") );
  ::odrxDynamicLinker()->loadApp( OD_T("AecStructureBase") );

  // Create empty TD database.
  OdDbDatabasePtr pDatabase = svcs.createDatabase();
  
  // Initialize database with default Teigha Architecture content.
  AECArchDACHBaseDatabase( pDatabase ).Init();
  AECScheduleDatabase( pDatabase ).Init();
  AECStructureBaseDatabase( pDatabase ).Init();


  init_display_system( pDatabase );

    
  // do something here with TA objects


  // Perform "zoom extents" on model space.
  {
    OdDbViewportTablePtr pVT =
      pDatabase->getViewportTableId().openObject( OdDb::kForRead );
    OdDbViewportTableRecordPtr pV =
      pVT->getActiveViewportId().openObject( OdDb::kForWrite );
    pV->zoomExtents();
  }

  OdWrFileBuf cBuffer( "H:\\TA_test.dwg" );
  pDatabase->writeFile( &cBuffer, OdDb::kDwg, OdDb::kDHL_CURRENT );
  
  odgsUninitialize();
  odUninitialize();

  return 0;
}

I’ve added the “zoom extents” command so that when we open the created file we can instantly see the objects added to it, and the symmetrical deinitialization of the library at the end. To keep things simple, I removed error checking and the try/catch constructs around the main actions.

Now the program will create an empty DWG file, which we can open and view in AutoCAD.

Handling objects

To show you how to work with the TA classes, I’m going to create a house consisting of a floor/foundation, walls, windows, a door, and a roof. Let’s begin with the walls.

For starters, we’ll add one wall to our drawing. To create the wall, we need to create a style for it first. Let’s write the add_wall_style function:


OdDbObjectId add_wall_style( OdDbDatabasePtr pDatabase )
{
  OdDbObjectId idResult =
    AECDbWallStyle::CreateAECObject( pDatabase, OD_T("Wall Style Created By Teigha(R) Architecture") );

  AECDbWallStylePtr pWallStyle =
    idResult.openObject( OdDb::kForWrite );

  pWallStyle->SetDescription( OD_T("Wall Style Description") );
  pWallStyle->SetDictRecordDescription( OD_T("Dialog caption") );

  pWallStyle->SetWallWidth( 4 );
  pWallStyle->SetWallWidthUsed( true );

  pWallStyle->SetBaseHeight( 110 );
  pWallStyle->SetBaseHeightUsed( true );

  pWallStyle->SetJustification( AECDefs::ewjLeft );
  pWallStyle->SetJustificationUsed( true );

  pWallStyle->SetAutomaticCleanups( true );
  pWallStyle->SetAutomaticCleanupsUsed( true );

  pWallStyle->SetCleanupRadius( 4 );
  pWallStyle->SetCleanupRadiusUsed( true );

  pWallStyle->SetFloorLineOffset( 3 );
  pWallStyle->SetFloorLineOffsetUsed( false );

  pWallStyle->SetRoofLineOffset( -3 );
  pWallStyle->SetRoofLineOffsetUsed( false );

  AECDisplayManager cDM( pDatabase );
  AECDbDispPropsWallModelPtr pOverrideModel =
    AECDbDispPropsWallModel::cast( pWallStyle->OverrideDispProps(
    cDM.UpdateDisplayRepresentation( AECDbDispRepWallModel::desc() ) ).openObject( OdDb::kForWrite ) );
  if ( !pOverrideModel.isNull() )
  {
    pOverrideModel->SetIsDisplayOpeningEndcaps( false );
    pOverrideModel->GetBoundaryCompByIndex( 0 )->SetColor( colorAt( 4 ) );
  }

  AECDbDispPropsWallPtr pOverridePlan =
    AECDbDispPropsWall::cast( pWallStyle->OverrideDispProps(
    cDM.UpdateDisplayRepresentation( AECDbDispRepWallPlan::desc() ) ).openObject( OdDb::kForWrite ) );
  if ( !pOverridePlan.isNull() )
  {
    pOverridePlan->GetBoundaryCompByIndex( 0 )->SetColor( colorAt( 4 ) );
  }

  return( pWallStyle->objectId() );
}

The function creates the AECDbWallStyle object and sets some of its parameters. Then it calls the display manager and changes the colors for the plan display representation (2D top view) and the model display representation (3D view).


AECDisplayManager cDM( pDatabase );
  AECDbDispPropsWallModelPtr pOverrideModel =
    AECDbDispPropsWallModel::cast( pWallStyle->OverrideDispProps(
    cDM.UpdateDisplayRepresentation( AECDbDispRepWallModel::desc() ) ).openObject( OdDb::kForWrite ) );
  if ( !pOverrideModel.isNull() )
  {
    pOverrideModel->SetIsDisplayOpeningEndcaps( false );
    pOverrideModel->GetBoundaryCompByIndex( 0 )->SetColor( colorAt( 2 ) );
  }

In this case, we’ve set the yellow color for the wall in the 3D view. It looks complicated, but there is a reason for that: The display representations and display manager mechanism in ACA work this way. The mechanism is flexible and has many capabilities, but its logic is not obvious and takes some learning on your part.

OdDbObjectId: The Runtime Reference

Database objects reference other database objects using ObjectId objects, and a database object pointer can always be obtained from a valid ObjectId object. The effect of this mechanism is that database objects do not have to reside in memory unless they are explicitly being examined or modified by the user.

The user must explicitly open an object before reading or writing to it, and should release it when the operation is completed. This functionality allows Teigha to support partial loading of a database, where ObjectId objects exist for all objects in the database, but the actual database objects need not be loaded until they are accessed. It also allows database objects that are not in use to be swapped out of memory, and loaded back in when they are accessed.

If you need to preserve references to objects between program launches, use OdDbHandle.

For example, the add_wall_style function has returned idWallStyle. In this case, the style has just been created explicitly by calling AECDbWallStyle::CreateAECObject(), so idWallStyle refers to an actual object that is already in memory. To get write access to the style object, we need to perform this operation:


AECDbWallStylePtr pWallStyle = idResult.openObject( OdDb::kForWrite );

As a result, ‘openObject()’ will return the actual pointer to the object, which we can use.

Instead of the regular С++ pointers, the library uses the OdSmartPtr smart pointers:


typedef OdSmartPtr<AECDbWallStyle> AECDbWallStylePtr

The smart pointer’s destructor notifies the framework when the object is closed. As a result, related objects may be recalculated, notifications sent, and so on.
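
As a small sketch of this lifetime, reusing the style created above and only calls already shown in this article, the open/modify/close cycle looks roughly like this:

{
  // Open the style for writing; openObject() returns a smart pointer.
  AECDbWallStylePtr pWallStyle = idWallStyle.openObject( OdDb::kForWrite );
  pWallStyle->SetBaseHeight( 120 );
  pWallStyle->SetBaseHeightUsed( true );
} // The smart pointer goes out of scope here, the object is closed,
  // and the framework is notified so that related objects can be updated.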

Now we can add the wall by this call:


OdDbObjectId idWall1 = add_wall( pDatabase, idWallStyle, OdGePoint2d( 0,     0 ), OdGePoint2d(   0, 110 ) );

The add_wall Listing

OdDbObjectId add_wall( OdDbDatabasePtr pDatabase, const OdDbObjectId& idStyle,
                      const OdGePoint2d& ptStart, const OdGePoint2d& ptEnd, double dBulge = 0 )
{
  AECDbWallPtr pWall =
    AECDbWall::CreateAECObject( pDatabase->getModelSpaceId(), idStyle );

  pWall->Set( ptStart, ptEnd, dBulge );
  pWall->SetDescription( OD_T("A Wall") );

  return( pWall->objectId() );
}

As you can see, add_wall doesn’t do anything special. It simply creates the AECDbWall object with the style that we created earlier. The AECDbWall object is added to the model space of the database. In simple terms, the model space is a special dictionary that contains all objects to be rendered when we render the database.

Then the start point, the end point, and curvature are defined for the wall. Note that a wall doesn’t have to be flat. If you wish, you can make it convex.

If we did everything right, we will get a DWG file with one yellow rectangular wall. I’m using the code sample from the Teigha distribution package to view the file, but it will be rendered in exactly the same way in ACA.

Actually, I’ve manually rotated the camera in the 3D view. By default, you will have the top view.

Now let’s add 4 walls, including one convex wall:


OdDbObjectId idWall1 = add_wall( pDatabase, idWallStyle,
    OdGePoint2d( 0,     0 ), OdGePoint2d(   0, 110 ) );
  OdDbObjectId idWall2 = add_wall( pDatabase, idWallStyle,
    OdGePoint2d( 0,   110 ), OdGePoint2d( 110, 110 ) );
  OdDbObjectId idWall3 = add_wall( pDatabase, idWallStyle,
    OdGePoint2d( 110, 110 ), OdGePoint2d( 110,   0 ) );
  OdDbObjectId idWall4 = add_wall( pDatabase, idWallStyle,
    OdGePoint2d( 110,   0 ), OdGePoint2d(   0,   0 ), -1 );

We’ve got the basic structure of our house:

As you can see, the walls were not simply rendered as separate objects. Instead, smooth junctions were added automatically. It is one of TA’s automatic functions known as “cleanup.”

Adding windows

Now let’s add windows to our house. We can handle windows just like doors: We need to create a style for the windows that we want to add to the drawing, and then add the window objects with that style.


OdDbObjectId idWindowStyle =  add_window_style( pDatabase );

OdDbObjectId add_window_style( OdDbDatabasePtr pDatabase )
{
  OdDbObjectId idWStyle =
    AECDbWindowStyle::CreateAECObject( pDatabase, OD_T("Window Style Created By Teigha(R) Architecture") );

  AECDbWindowStylePtr pWindowStyle = idWStyle.openObject( OdDb::kForWrite );

  pWindowStyle->SetDescription( OD_T("Window Style Description") );
  pWindowStyle->SetDictRecordDescription( OD_T("Dialog caption") );
  pWindowStyle->SetAutoAdjustToWidthOfWall( true );
  pWindowStyle->SetFrameWidth( 2 );
  pWindowStyle->SetFrameDepth( 5 );
  pWindowStyle->SetSashWidth( 2 );
  pWindowStyle->SetSashDepth( 3 );
  pWindowStyle->SetGlassThickness( 1 );
  pWindowStyle->SetWindowType( AECDefs::ewtGlider );
  pWindowStyle->SetWindowShape( AECDefs::esRectangular );

  AECDisplayManager cDM( pDatabase );
  AECDbDispPropsWindowPtr pOverrideModel =
    AECDbDispPropsWindow::cast( pWindowStyle->OverrideDispProps(
    cDM.UpdateDisplayRepresentation( AECDbDispRepWindowModel::desc() ) ).openObject( OdDb::kForWrite ) );
  if ( !pOverrideModel.isNull() )
  {
    pOverrideModel->GetFrameComp()->SetColor( colorAt( 1 ) );
    pOverrideModel->GetSashComp()->SetColor( colorAt( 2 ) );
    pOverrideModel->GetGlassComp()->SetColor( colorAt( 3 ) );
  }

  AECDbDispPropsWindowPtr pOverridePlan =
    AECDbDispPropsWindow::cast( pWindowStyle->OverrideDispProps(
    cDM.UpdateDisplayRepresentation( AECDbDispRepWindowPlan::desc() ) ).openObject( OdDb::kForWrite ) );
  if ( !pOverridePlan.isNull() )
  {
    pOverridePlan->GetFrameComp()->SetColor( colorAt( 1 ) );
    pOverridePlan->GetSashComp()->SetColor( colorAt( 2 ) );
    pOverridePlan->GetGlassComp()->SetColor( colorAt( 3 ) );
  }

  return( pWindowStyle->objectId() );
}

As you can see in the code, the AECDbWindowStyle object is created and added to the database. Then some settings are set for the style (though we could have used the default ones). After that, colors for a few components are redefined for the 2D view and the 3D view. In this case, the components are physical parts of a window: a frame, a sash, and a sheet of glass.

Let’s add a window to the first wall by calling the add_window function:


OdDbObjectId idWindow01 = add_window( pDatabase, idWindowStyle, idWall1, 10, 10 );

// Inserts a window into a database using the specified window style.
// If idWall parameter is not null it also attaches the window to the wall.
// Returns Object ID of newly created window.
OdDbObjectId add_window( OdDbDatabasePtr pDatabase, const OdDbObjectId& idStyle, const OdDbObjectId& idWall,
                        double dOffsetAlongX, double dOffsetAlongZ )
{
  AECDbWindowPtr pWindow = AECDbWindow::CreateAECObject( pDatabase->getModelSpaceId(), idStyle );

  pWindow->SetRise( 10 );
  pWindow->SetWidth( 40 );
  pWindow->SetHeight( 40 );

  pWindow->SetOpenPercent( 60 );
  pWindow->SetMeasureTo( AECDefs::eomtOutsideFrame );
  pWindow->SetLeaf( 10 );

  if ( !idWall.isNull() )
  {
    pWindow->AttachWallAnchor( idWall );

    AECDbAnchorEntToCurvePtr pAnchor = pWindow->GetAnchor().openObject( OdDb::kForWrite );
    pAnchor->GetXParams()->SetOffset( dOffsetAlongX );       
    pAnchor->GetZParams()->SetOffset( dOffsetAlongZ );
  }

  return( pWindow->objectId() );
}

The add_window() function is similar to add_wall(), but there is a difference: It uses the anchor object.

We create the AECDbWindow object and add it to the model space of the database. Then we set some settings for that specific instance of AECDbWindow. After that, we put the window into the wall. A special object derived from AECDbAnchorEntToCurve attaches the window to the wall.

That object contains the offsets along the X, Y, and Z axes from the origin of the coordinate system of the wall to the origin of the coordinate system of the window. When we call AttachWallAnchor(), an instance of that object is created and added to the database. The wall itself doesn’t know if it has any windows. Creating an anchor involves another basic mechanism, the relation graph, which contains relations between objects: Who is attached to whom, who includes whom, and who owns whom. If you modify the wall, the relation graph will be notified that the AECDbWall object has changed. It will check all relations and trigger the updating of the related objects. In this case, AECDbWindow will be updated. For example, if you move the wall, the windows in it will move automatically, because they will be notified by the relation graph. You can get access to the relation graph and request relations for a specific object. Actually, a window knows to which object it is attached, because each window contains a reference to the anchor created.

Let’s take a look at the result:

I’ve changed the color of the walls so that you can see the window more clearly. (The code creates blue walls; I just picked colors while writing this article.) TA contains lots of predefined window styles and types, which you can select through enumerations:


enum WindowType
    {
        ewtPicture          =  1,
        ewtSingleHung       =  2,
        ewtDoubleHung       =  3,
        ewtAwningTransom    =  4,
        ewtDoubleCasement   =  5,
        ewtGlider           =  6,
        ewtHopperTransom    =  7,
        ewtPassThrough      =  8,
        ewtSingleCasement   =  9,
        ewtSingleHopper     = 10,
        ewtSingleAwning     = 11,
        ewtVerticalPivot    = 12,
        ewtHorizontalPivot  = 13,
        ewtUnevenSingleHung = 14,
        ewtUnevenDoubleHung = 15
    };
enum Shape
    {
        esRectangular       =  0,
        esRound             =  1,
        esHalfRound         =  2,
        esQuarterRound      =  3,
        esOval              =  4,
        esArch              =  5,
        esTrapezoid         =  6,
        esGothic            =  7,
        esIsoscelesTriangle =  8,
        esRightTriangle     =  9,
        esPeakPentagon      = 10,
        esOctagon           = 11,
        esHexagon           = 12,
        esCustom            = 13
    };

I’ve selected AECDefs::ewtGlider and AECDefs::esRectangular. As you can see, a lot of other shapes are available, too. By using these and other settings, you can create a very complex window type, with multiple sashes and an internal pattern on the glass sheets. The best thing is that you don’t have to create it piece by piece manually or implement everything programmatically. All you need to do is set a few parameters for the existing object or style.

Actually, all Teigha Architecture objects are pretty complex and have a lot of settings. Thanks to that, Teigha for Architecture provides massive opportunities “out of the box.”

Let’s add windows to all flat walls:


OdDbObjectId idWindow01 = add_window( pDatabase, idWindowStyle, idWall1, 10, 10 );
  OdDbObjectId idWindow02 = add_window( pDatabase, idWindowStyle, idWall1, 60, 10 );
  OdDbObjectId idWindow03 = add_window( pDatabase, idWindowStyle, idWall1, 10, 60 );
  OdDbObjectId idWindow04 = add_window( pDatabase, idWindowStyle, idWall1, 60, 60 );

  OdDbObjectId idWindow05 = add_window( pDatabase, idWindowStyle, idWall2, 10, 10 );
  OdDbObjectId idWindow06 = add_window( pDatabase, idWindowStyle, idWall2, 60, 10 );
  OdDbObjectId idWindow07 = add_window( pDatabase, idWindowStyle, idWall2, 10, 60 );
  OdDbObjectId idWindow08 = add_window( pDatabase, idWindowStyle, idWall2, 60, 60 );

  OdDbObjectId idWindow09 = add_window( pDatabase, idWindowStyle, idWall3, 10, 10 );
  OdDbObjectId idWindow10 = add_window( pDatabase, idWindowStyle, idWall3, 60, 10 );
  OdDbObjectId idWindow11 = add_window( pDatabase, idWindowStyle, idWall3, 10, 60 );
  OdDbObjectId idWindow12 = add_window( pDatabase, idWindowStyle, idWall3, 60, 60 );

I didn’t bother to complete the code, but you can handle each window separately: change its opening percentage, color, etc. If you change the style, the change will be applied to all windows with that style at once.

Adding doors to the drawing

To complete the picture, let’s add a door. First, we’ll create a 2D profile for the door panel (a door leaf with a window hole). Then we’ll create a style with that profile. Finally, we’ll be able to create door objects with that style. Alternatively, we could use the default styles. Just like windows (or any other openings), doors are attached to walls with an anchor. Here is the add_profile_def, add_door_style, and add_door listing:


// Inserts profile definition into a database.
// Returns Object ID of newly created profile definition.
OdDbObjectId add_profile_def( OdDbDatabasePtr pDatabase )
{
  OdDbObjectId idProfDef =
    AECDbProfileDef::CreateAECObject( pDatabase, OD_T("Profile Definition Created By Teigha(R) Architecture") );

  AECDbProfileDefPtr pProfileDefinition = idProfDef.openObject( OdDb::kForWrite );

  AECGe::Profile2D cProfile;
  cProfile.resize( 2 );

  cProfile[ 0 ].appendVertex( OdGePoint2d( 0, 0 ) );
  cProfile[ 0 ].appendVertex( OdGePoint2d( 1, 0 ) );
  cProfile[ 0 ].appendVertex( OdGePoint2d( 1, 1 ) );
  cProfile[ 0 ].appendVertex( OdGePoint2d( 0, 1 ) );
  cProfile[ 0 ].setClosed();

  // Forces the contour to be counter-clockwise.
  // So if the contour is already ccw this call is not needed.
  cProfile[ 0 ].makeCCW();

  cProfile[ 1 ].appendVertex( OdGePoint2d( 0.2, 0.2 ) );
  cProfile[ 1 ].appendVertex( OdGePoint2d( 0.2, 0.8 ) );
  cProfile[ 1 ].appendVertex( OdGePoint2d( 0.8, 0.8 ) );
  cProfile[ 1 ].appendVertex( OdGePoint2d( 0.8, 0.2 ) );
  cProfile[ 1 ].setClosed();

  // The inner contour is the window hole in the door leaf, so it is forced
  // to be clockwise (the opposite direction of the outer contour).
  cProfile[ 1 ].makeCCW( false );

  pProfileDefinition->GetProfile()->Init( cProfile );

  return( pProfileDefinition->objectId() );
}

// Inserts a door style into a database.
// Returns Object ID of newly created door style.
OdDbObjectId add_door_style( OdDbDatabasePtr pDatabase, const OdDbObjectId& idProfile )
{
  OdDbObjectId idDoorStyle =
    AECDbDoorStyle::CreateAECObject( pDatabase, OD_T("Door Style Created By Teigha(R) Architecture") );
  AECDbDoorStylePtr pDoorStyle = idDoorStyle.openObject( OdDb::kForWrite );

  pDoorStyle->SetDescription( OD_T("Door Style Description") );
  pDoorStyle->SetDictRecordDescription( OD_T("Dialog caption") );
  pDoorStyle->SetAutoAdjustToWidthOfWall( true );
  pDoorStyle->SetFrameWidth( 2 );
  pDoorStyle->SetFrameDepth( 5 );
  pDoorStyle->SetStopWidth( 2 );
  pDoorStyle->SetStopDepth( 3 );
  pDoorStyle->SetShapeAndType( AECDefs::esCustom, AECDefs::edtSingle );
  pDoorStyle->SetProfile( idProfile );
  pDoorStyle->SetGlassThickness( 1 );

  AECDisplayManager cDM( pDatabase );
  AECDbDispPropsDoorPtr pOverrideModel =
    AECDbDispPropsDoor::cast( pDoorStyle->OverrideDispProps(
    cDM.UpdateDisplayRepresentation( AECDbDispRepDoorModel::desc() ) ).openObject( OdDb::kForWrite ) );
  if ( !pOverrideModel.isNull() )
  {
    pOverrideModel->GetPanelComp()->SetColor( colorAt( 1 ) );
    pOverrideModel->GetFrameComp()->SetColor( colorAt( 2 ) );
    pOverrideModel->GetStopComp()->SetColor( colorAt( 3 ) );
    pOverrideModel->GetSwingComp()->SetColor( colorAt( 4 ) );
    pOverrideModel->GetGlassComp()->SetColor( colorAt( 5 ) );
  }

  AECDbDispPropsDoorPtr pOverridePlan =
    AECDbDispPropsDoor::cast( pDoorStyle->OverrideDispProps(
    cDM.UpdateDisplayRepresentation( AECDbDispRepDoorPlan::desc() ) ).openObject( OdDb::kForWrite ) );
  if ( !pOverridePlan.isNull() )
  {
    pOverridePlan->GetPanelComp()->SetColor( colorAt( 1 ) );
    pOverridePlan->GetFrameComp()->SetColor( colorAt( 2 ) );
    pOverridePlan->GetStopComp()->SetColor( colorAt( 3 ) );
    pOverridePlan->GetSwingComp()->SetColor( colorAt( 4 ) );
    pOverridePlan->GetDirectionComp()->SetColor( colorAt( 5 ) );
  }

  return( pDoorStyle->objectId() );
}


// Inserts a door into a database using the specified door style.
// If idWall parameter is not null it also attaches the door to the wall.
// Returns Object ID of newly created door.
OdDbObjectId add_door( OdDbDatabasePtr pDatabase, const OdDbObjectId& idStyle, const OdDbObjectId& idWall,
                      double dOffsetAlongX, double dOffsetAlongZ )
{
  AECDbDoorPtr pDoor = AECDbDoor::CreateAECObject( pDatabase->getModelSpaceId(), idStyle );

  pDoor->SetRise( 10 );
  pDoor->SetWidth( 40 );
  pDoor->SetHeight( 50 );

  pDoor->SetOpenPercent( 20 );
  pDoor->SetMeasureTo( AECDefs::eomtOutsideFrame );
  pDoor->SetLeaf( 10 );

  if ( !idWall.isNull() )
  {
    pDoor->AttachWallAnchor( idWall );

    AECDbAnchorEntToCurvePtr pAnchor = pDoor->GetAnchor().openObject( OdDb::kForWrite );
    pAnchor->GetXParams()->SetOffset( dOffsetAlongX );
    pAnchor->GetZParams()->SetOffset( dOffsetAlongZ );
  }

  return( pDoor->objectId() );
}

Let’s add the following code to main:


  AECDbWallPtr pWall = idWall4.openObject( OdDb::kForRead );
  double dLength = pWall->GetLength();
  double dOWidth = 40;
  double dL1 = 10;
  double dL3 = dLength - dOWidth - 10;
  double dL2 = dL1 + dOWidth + (dL3 - (dL1 + 2 * dOWidth)) / 2;

  OdDbObjectId idDoor     = add_door   ( pDatabase, idDoorStyle,   idWall4, dL2, 0  );

There is a difference, though: we open the wall with read access and get its length in order to calculate the offsets.

As a result, we’ve put a door in the convex wall:

Let’s also add windows to the convex wall:


  OdDbObjectId idWindow13 = add_window ( pDatabase, idWindowStyle, idWall4, dL1, 10 );
  OdDbObjectId idWindow14 = add_window ( pDatabase, idWindowStyle, idWall4, dL3, 10 );
  OdDbObjectId idWindow15 = add_window ( pDatabase, idWindowStyle, idWall4, dL1, 60 );
  OdDbObjectId idWindow16 = add_window ( pDatabase, idWindowStyle, idWall4, dL2, 60 );
  OdDbObjectId idOpening  = add_window ( pDatabase, idWindowStyle, idWall4, dL3, 60 );

The result is a house without any roof or floor:

Let’s write the ‘add_roof()’ function.


void add_roof( OdDbDatabasePtr pDatabase )
{
  AECGe::Profile2D cProfile;
  cProfile.resize( 1 );
  cProfile.front().appendVertex( OdGePoint2d( 0,   0   ) );
  cProfile.front().appendVertex( OdGePoint2d( 0,   110 ) );
  cProfile.front().appendVertex( OdGePoint2d( 110, 110 ) );
  cProfile.front().appendVertex( OdGePoint2d( 110, 0 ), -1 );
  cProfile.front().setClosed();
  cProfile.front().makeCCW();
  
  AECDbRoofPtr pRoof =
    AECDbRoof::CreateAECObject( pDatabase->getModelSpaceId() );

  // Initialize roof profile.
  // By default all edges of Roof Profile have single slope of 45 degrees.
  pRoof->GetProfile()->Init( cProfile );

  pRoof->SetThickness( 2 );

  //// Manually modify Roof Segments.
  AECGeRingSubPtr pRoofLoop = pRoof->GetProfile()->GetRingByIndex( 0 );
  if ( !pRoofLoop.isNull() )
  {
    OdUInt32 i, iSize = pRoofLoop->GetSegmentCount();
    for ( i = 0; i < iSize; i++ )
    {
      AECGeRoofSegmentSubPtr pSeg = pRoofLoop->GetSegments()->GetAt( i );
      pSeg->SetFaceCount(1);
      pSeg->SetFaceHeightByIndex(0, 110);
      pSeg->SetBaseHeight(0);
      pSeg->SetOverhang(10.0);
      pSeg->SetFaceSlopeByIndex(0, OdaPI4);
      pSeg->SetSegmentCount(10);
    }
  }

  pRoof->setColorIndex( 3 );  
}

The roof is created from a 2D profile whose direction of traversal is counterclockwise. Calling makeCCW() changes the direction of traversal if it was clockwise. This is important because the roof-building algorithm expects a counterclockwise profile; otherwise it won’t work.

The profile coincides with the central line of the walls. Then we need to set the slope angle for each segment of the profile, the number of faces in the roof, the elevation (Z coordinate) of the top point of each face above the XY plane (SetFaceHeightByIndex), and the overhang. SetSegmentCount() works only with segments that have a curvature. This parameter sets the approximation accuracy, that is, the number of line segments that approximate an arc.
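
For instance, if you wanted a different pitch on alternating edges, the same loop could vary the slope per segment. This is only a sketch reusing the variables and calls from add_roof above; the 60-degree value is purely illustrative:

for ( i = 0; i < iSize; i++ )
{
  AECGeRoofSegmentSubPtr pSeg = pRoofLoop->GetSegments()->GetAt( i );
  // 45 degrees on even-numbered edges, 60 degrees on odd ones.
  pSeg->SetFaceSlopeByIndex( 0, ( i % 2 == 0 ) ? OdaPI4 : OdaPI / 3.0 );
}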

Here’s our roof:

There are lots of roof settings, so you can create a roof of almost any form: a gable roof, a hip roof, a hip-and-valley roof, you name it. Each face is a separate RoofSlab object which you can edit manually.

Adding a floor to the drawing

Now we need to add at least a very basic floor/foundation. To do it, we can use the slab object. Let’s write the add_slab function.

void add_slab( OdDbDatabasePtr pDatabase )
{
  OdDbObjectId idStyle =
    AECDbSlabStyle::GetAECObject( pDatabase, OD_T("Slab Style") );
  if ( idStyle.isNull() )
  {
    idStyle = AECDbSlabStyle::CreateAECObject( pDatabase, OD_T("Slab Style") );
  }

  AECDbSlabStylePtr pStyle =
    idStyle.openObject( OdDb::kForWrite );
  if ( !pStyle.isNull() )
  {
    pStyle->GetComponents()->Clear();

    AECSlabStyleCompPtr pCmp = AECSlabStyleComp::createObject();
    pCmp->SetName( OD_T("Base") );
    pCmp->GetPosition()->GetThickness()->SetUseBaseValue( false );
    pCmp->GetPosition()->GetThickness()->SetBaseValue( 6 );
    pCmp->GetPosition()->GetThicknessOffset()->SetUseBaseValue( false );
    pCmp->GetPosition()->GetThicknessOffset()->SetBaseValue( - 6 );
    pStyle->GetComponents()->Insert( pCmp );
  }

  AECDbSlabPtr pSlab =
    AECDbSlab::CreateAECObject( pDatabase->getModelSpaceId(), idStyle );

  {
    AECGe::Profile2D cBase;
    cBase.resize( 1 );
    cBase.front().appendVertex( OdGePoint2d( -5,   -5   ), 1 );
    cBase.front().appendVertex( OdGePoint2d( 115, -5   ) );
    cBase.front().appendVertex( OdGePoint2d( 115, 115 ) );
    cBase.front().appendVertex( OdGePoint2d( -5,   115 ) );
    cBase.front().setClosed();
    cBase.front().makeCCW();

    pSlab->GetSlabFace()->Init( cBase );
  }

  pSlab->SetThickness( 5 );

  pSlab->SetVerticalOffset( 0 );
  pSlab->SetHorizontalOffset( 0 );

  pSlab->SetPivotPoint( OdGePoint3d::kOrigin );

  AECDisplayManager cDM( pDatabase );
  AECDbDispPropsSlabPtr pOverrideModel =
    AECDbDispPropsSlab::cast( pSlab->OverrideDispProps(
    cDM.UpdateDisplayRepresentation( AECDbDispRepSlabModel::desc() ) ).openObject( OdDb::kForWrite ) );
  if ( !pOverrideModel.isNull() )
  {
    pOverrideModel->GetBoundaryCompByIndex( 0 )->SetColor( colorAt( 1 ) );
    pOverrideModel->GetBaselineComp()->SetColor( colorAt( 4 ) );
    pOverrideModel->GetPivotPointComp()->SetColor( colorAt( 5 ) );
    pOverrideModel->GetFasciaComp()->SetColor( colorAt( 6 ) );
    pOverrideModel->GetSoffitComp()->SetColor( colorAt( 7 ) );
    pOverrideModel->GetShrinkWrapBodyComp()->SetColor( colorAt( 8 ) );
  }

  AECDbDispPropsSlabPlanPtr pOverridePlan =
    AECDbDispPropsSlabPlan::cast( pSlab->OverrideDispProps(
    cDM.UpdateDisplayRepresentation( AECDbDispRepSlabPlan::desc() ) ).openObject( OdDb::kForWrite ) );
  if ( !pOverridePlan.isNull() )
  {
    pOverridePlan->SetIsOverrideCutPlane( false );
    pOverridePlan->GetHatchComp()->SetColor( colorAt( 1 ) );
    pOverridePlan->GetBelowCutPlaneBodyComp()->SetColor( colorAt( 2 ) );
    pOverridePlan->GetAboveCutPlaneBodyComp()->SetColor( colorAt( 3 ) );
    pOverridePlan->GetBelowCutPlaneOutlineComp()->SetColor( colorAt( 4 ) );
    pOverridePlan->GetAboveCutPlaneOutlineComp()->SetColor( colorAt( 5 ) );
  }
}

In this case, we create a new floor style, and then add components to it. A component is a floor piece that contains such parameters as thickness, elevation above the XY plane, name, material, index, etc. A floor can consist of several components with different settings. For example, if they have different elevations above the XY plane, you can use one floor object of that style to render all the floors and ceilings in a multistory building.
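
For example, a second component could be appended to the same style with its own thickness and offset. This is just a sketch reusing the exact calls from add_slab above; the component name and values are illustrative:

AECSlabStyleCompPtr pCmp2 = AECSlabStyleComp::createObject();
pCmp2->SetName( OD_T("Ceiling") );
pCmp2->GetPosition()->GetThickness()->SetUseBaseValue( false );
pCmp2->GetPosition()->GetThickness()->SetBaseValue( 2 );
pCmp2->GetPosition()->GetThicknessOffset()->SetUseBaseValue( false );
pCmp2->GetPosition()->GetThicknessOffset()->SetBaseValue( 100 ); // e.g., elevation of a ceiling component
pStyle->GetComponents()->Insert( pCmp2 );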

Style settings are applied to a specific object that contains the shape of the floor. In this case, we create a slab and initialize its profile with the same contour as the bottom part of the walls, only with a small offset at the edges. Then we use the display manager to redefine the colors of different components of the floor.

At last, we’ve created a house that looks like this:

Just to be sure, let’s try to load the resulting DWG file in Autodesk ACA:

That’s our house loaded in AutoCAD Architecture. It looks even better here, doesn’t it?

Conclusion

Using Teigha, we’ve created an empty database, initialized it for handling architectural objects, created a few commonly-used object types, and successfully saved all of them to a DWG file of the latest format.

Of course, I’ve simplified many things and described them in a cursory manner. But the purpose of this article is to demonstrate the capabilities of Teigha Architecture and give you a general idea of Teigha as an alternative solution for handling DWG files and AutoCAD objects.

This article was written by Aleksey Abramovsky, a Toptal C++ developer.

The Ultimate Introduction To Agile Project Management


The Brief

You’re in charge of delivering your company’s latest and greatest initiative that’s going to change the face of “Widgets International” forever. It’s a software project that’ll engage and enthrall your customers, make your colleagues’ lives easier, and make the company millions in revenue. There’s a great deal of anticipation, fervour, excitement, and expectation. You need to get it done as quickly as possible, so your business can start to reap the benefits. The future success of the company depends on you. All eyes are on you. You cannot fail.

Agile project management

At first, you’re thinking to yourself “awesome, I’m up for the challenge. Let’s get this thing done!”. You pause for a moment, step back, and think to yourself “okay, so how do we do this?”. You start to talk to your colleagues and peers. You spend time searching for best practice software development and project management techniques, but the options and approaches are countless. There are acronyms and methodologies aplenty. Notable ones rise to the top. Doubt creeps in. Which one should we use? How can I guarantee success? What if I make the wrong decisions?

When it comes to managing software projects, there’s a heady mix of options supported by a myriad of opinion. Voices from the corners of the room whisper “try doing it this way”, others shout “this is the only way to do it”, and the rest just whimper “don’t manage it at all, just get on with it”. In reality, all those voices speak some truth. But what’s important is working out what’s right for your needs, your team, your business, and your customers.

Setting the Scene

There was a time when software project management sat squarely in one of three camps. There were the heavy frameworks that let you make decisions on how you execute and deliver, while offering a structure to maintain control and governance. There were prescriptive sequential methodologies like waterfall that forced you to plan lengthy projects, understand and commit to all your requirements, design and sign off complex systems, write lots of code, and then test (all before your customer gets to see it for the first time). And finally, the less prescriptive but iterative software development life cycles (SDLC) that encourage rapid prototyping or larger systems to be designed, built, and delivered in incremental steps, each building on top of the other.

Agile software development and Agile project management were born out of the inadequacies of waterfall and the benefits of the iterative approaches to software delivery. They can trace their roots to the 1950s, thought leadership in the ’70s, maturity in the ’90s, and adoption through the ’00s. In 2001, a group of practitioners and experts created the Agile Manifesto, aimed at defining 4 values and 12 guiding principles that seek to embody the spirit of Agile software development and to encourage its evolution. And it has definitely evolved.

Now, simply calling something Agile isn’t particularly helpful. The word, even in a software context, means different things to different people or organizations. There are many facets, definitions, implementations, and interpretations. Each body that embraces agile tends to try to give it its own definition.

Simply calling something Agile isn’t particularly helpful.

Suffice to say that Agile software development and project management are a group of related behaviours, frameworks, techniques, and concepts that fundamentally favor the delivery of the right working software as early and as frequently as realistically possible.

I mentioned earlier that Agile as applied to software development and Agile as applied to project management are different things. In a nutshell, Agile software development takes care of developing great software in a business as usual (BAU) or project context. Agile project management, on the other hand, takes care of the governance and control required to deliver complex projects, including but not limited to software.

There are many Agile software development methods available, such as Scrum, Kanban, XP, and Lean Software Development. But just as the game of rugby is about more than the scrum, so is Agile. In isolation, these Agile paradigms do not address the full lifecycle of project management required in complex projects such as governance, resourcing, financial, explicit risk management, and many other important project management concepts. For these, you might want to consider PMI Agile or PRINCE2 Agile – think of it as “Governed Agility”.

Scrum and agile project management

Why Do We Need To Be Agile?

Long ago, we roamed the land to gather food and shelter to survive. They were simple needs, but pretty agile. Some time later, countries and economies grew and prospered on the back of the Industrial Revolution. This was the birth of management and control and the loss of agility. Now we’re in the Information Age or Revolution, where businesses employ knowledge workers. Knowledge workers are you, your partners, your colleagues and peers that endeavor to create great solutions to customer, business, social, economic and world problems. Knowledge workers apply analysis, knowledge, reasoning, understanding, expertise, and skills to often loosely defined and changing needs. These businesses and workers need methods and techniques that cannot be met by old Industrial Age processes and procedures. Agile supports interactions.

Virtually no software project can confidently set out at the beginning and know all that it needs in order to deliver valuable working software without change. Change presents both opportunities and risks to the success of a project. Unmanaged opportunities can mean the difference between a great company and an awesome company. Unmanaged risk spells disaster and ruin. Agile manages change.

Adopting agile allows you to be responsive to changing or new requirements. It empowers development teams to be the experts and make decisions supported by an engaged, trusting, and informed business. It enables you to deliver to customers what they really want. Ultimately, it puts you and your organization in control of delivering high quality valuable software that delivers on customer need and expectations whilst extracting a return on your investment dollars as early as possible. Agile creates value.

There is a cost to adopting agile. It doesn’t come for free. Transforming to an agile approach for software delivery can be a hard path to follow. However, if you internalize the agile philosophy, tread carefully, engage the right team with the right attitude, break things down, make it achievable and realistic and respond to feedback, you will reap rewards. Agile emphasizes collaboration.

The following lists some benefits you can expect:

  • Speed to market
  • Earlier revenue generation
  • Regular delivery of real value
  • Protection for your investment
  • Data, data, data
  • Better product quality
  • Manageable expectations
  • Greater customer satisfaction
  • Higher performing teams
  • Improved visibility on progress
  • Predictability, transparency, and confidence
  • Manageable risk

“Success is not final, failure is not fatal: it is the courage to continue that counts.”

Winston Churchill may never have actually said this, but I think it’s a pretty good summation of Agile. We know Agile is the best foot forward for most projects. It encourages you to strive for success, but we always iterate and keep building on it. Agile will encourage you to fail, but fail early and move on. Having the courage to continue and to build the right solution based on insight informed by your customer is what brings the reward.

The thing to keep in mind is you can tailor Agile to your needs. Use the method and governance that is right for your business. Wherever you start, be true to the content, context, and spirit of the method you use – keep it vanilla. If you’re just starting out – Learn. If you’ve been doing it for a while – Understand. If you’re becoming awesome – Apply. Finally, if your business and your projects are complex and interdependent – Govern. Over time, you and your teams will figure out what works best for your business.

Feasibility

So now you’re thinking “okay, I get it. How do I start? Where do I start?”. Well, with all good things, we start at the beginning. And with Agile, it’s by asking yourself “What business value do I want to deliver?”. After all, that’s why we undertake projects, to generate business value. In order to establish if the project is worth undertaking to derive the business value, you need to understand whether it is feasible.

Vision

Is your project projected to increase revenue, enter a new market, acquire more customers, improve customer perception, or make life easier for a given problem you’ve identified? With this in mind, you can state your “Vision”.

  • Your vision may come from different sources – your own bold startup to fix a common problem, business management strategy, your CEO’s pet project, a specific product team, or even your customer’s needs.
  • Try to step out of your own shoes and “see” what the future looks like with your new product or service in the hands of your customers.
  • Engage your stakeholders – the CEO, product guy, and customers. Workshop it, don’t attempt this in isolation. Challenge assumptions and validate arguments.
  • Write it down, keep it short. Focus on the business value.
  • Refine it until you all agree that the vision resonates with everybody and you share a common interpretation of where you’re heading.
  • Your vision, if valid, rarely changes. How you get there most certainly will.

People don’t buy what you do, or how you do it. They buy the “why” you do it. This is what creates the emotional connection between your business and your customers. The vision will help illustrate this.

Is it feasible?

Feasibility comes in at least a couple of shades. Typically, you’ll want to understand if your vision of a brighter future for your business and customers is both technically feasible and that it’s feasible for your business to make it happen.

  • If your vision is to make travel to anywhere across the world in under an hour, you may have a problem with the technical feasibility. Since science, physics, and technology haven’t quite caught up with that dream yet, your technical solution may not be viable in anything other than theory. In addition, if your solution was new, this would go well beyond the idea of a Minimum Viable Product (MVP).
  • To test the technical feasibility of your product, consider either exploring it further in a Discovery prototype project or by running a spike in the early stages of the project. You’ll know which method to use by thinking about the scale or complexity of the solution you have in mind.

    “Some of the best knowledge my teams have gained in understanding technical feasibility have come from performing a spike. And often, it’s the simplest solution that wins out!”

  • The second shade of feasibility to consider is whether you, your team, or your business has the skills and motivation to make it work. Using an example, if you’re great at baking cakes at home for your kids’ birthdays, that’s sweet. But if you want to turn this into a business selling the finest cakes to the world, you need to understand if you can make it scale, handle the business as well as the production, manage distribution and fulfillment, and take care of customer service.
  • This type of vision might be achievable in the long run. But for now, possibly not. So scale it back, think small, take a small chunk that looks realistic and concentrate on delivering the best but smaller aspiration you can. If that manages to engage and delight your customers, get them coming back for more and telling their friends, then scale it up from there using your customer feedback as your guide and compass.
  • Also, you need to know if your project is feasible in terms of budget and timeframe. Can your business afford to deliver this project? Is the timeframe achievable? Time and money are two of the three constraints in an Agile project that are fixed. We aim to deliver within a given fixed time and within a given fixed budget.
  • The quality of a product refers to the end product that your customers use and the engineering practices your team uses to deliver great, robust, and reliable software. Quality is also something we don’t short change on. Quality criteria, on the other hand, can change. If you’re not setting out to build a Ferrari, the product may not have a high quality perception. If you’re not building space rockets, then the tolerances attained in production terms may be much higher. Set the appropriate tone and expectation early on.

So now you’ve confirmed your dream is more than chocolate fancy, set about testing your assumptions, and proving to people that this endeavor is worth investing in.

Justification

Now depending on your circumstances, justification will come in different forms. But essentially, you want to prove that this project will satisfy customer success criteria, has a chance of success, will deliver value, and is affordable.

  • State your assumptions based on your customer need, then validate them. The Lean Startup gives great guidance on identifying and proving that your product is needed by your customers and will create value.
  • Write, test, and validate your business plan. Now this looks nothing like the ones your bank or Business and Finance major told you to produce. Don’t use them, they will be out of date before the ink is dry. Instead, check out the Business Model Canvas. This is essentially a short form business plan that keeps your focus on your value proposition, your customers, revenue, and costs. Use it to validate if you have a business that will work.

    “I ignored this advice once and spent a long time writing a lengthy traditional 50 page business plan. It got me nowhere. All the assumptions I had made were unfounded, and all the projections I made couldn’t be validated. It was a painful and expensive experience that taught me to never do it again.”

  • If you’re in a mature business with portfolios of projects being delivered in a complex environment, then financial modeling may be necessary. If you must, do this only after you’ve proven the above.
  • Once you’ve built your MVP, there may be a case for creating a more traditional business plan. For example, if you have to go for funding or selection within your company’s portfolio of competing projects and resources. But this will be a business plan based on and informed by the tools used above. It will be lighter too.
  • In any case, use these tools as living, breathing artifacts. Use them as your guide and bellwether. They are never static. Refer to them and revise them as your project or business evolves.

Once you have your justification and all your stakeholders are onboard, you’ll be on fire.

The Feasibility phase is typically performed once in the life of your project. You may find you revisit the vision and feasibility of the project, especially if your data, customers, market or business indicate so. At the very least, they will be your guiding lights throughout.

Initiation

Awesome. The decision has been made, the project has the green light and you’re ready to build. Well, nearly. I know you’re thinking, “c’mon already, really? If we don’t do this now we never will. Let’s get this show on the road!”. But consider this – Agile is nothing if not about delivering value early and often whilst delighting your customers along the way. Taking some time to figure out the best way to deliver your project is the best laid foundation for success.

The Team

In sport, by thinking about your favorite team game you’ll be able to recognise key roles that enable the team to perform as they do. Traditionally you’ll find a manager, a captain, and the rest of the squad. Outside of that, you’ll find coaches, physios, nutritionists, and an assortment of supporting staff. But if we look at the game of rugby, there’s a team within a team. The players that make up the scrum. This pack is made up of designated players whose job it is to win the ball back and continue play. When a scrum is in play, the players from each side then work, with no leader, as a single unit as collaboratively, communicatively, and efficiently as possible to get the ball back in possession. It is the game of rugby that inspired Jeff Sutherland to name his software development methodology, Scrum.

  • Scrum is not the only software development method in the Agile playbook. But it is the one that best describes the Agile concept and behaviors of working as a team, motivating individuals, creating trusting relationships, self-organization, servant-leadership, communication, transparency, and collaboration.
  • Your team will be formed largely by the circumstances you find yourself in. You may have developers available to you; some, none, or all of them may be familiar with Agile to varying degrees. You may want to hire a new team or partner with a 3rd party.
  • Other roles will be required too, but we’ll discuss those later.
  • It has been said that once you form your development team, you’ve chosen your technology, because depending on where you bring your team together from, they’ll come with certain skill sets. So think carefully about how you form your development team and whether you need to perform a technical evaluation before you get to this point in your journey.
  • This brings us to cross-functional teams. Teams work best when they work together, when individuals pitch in to get the job done regardless of their “title”. Try to build a team that is self-sufficient, with individuals who take on more than one role.
  • Build an environment, a culture, and relationships centered on delivery – a place where the team can deliver, unencumbered by constraints or restrictions. Give the team the tools, people, resources, and space to be effective and performant.
  • Keep team sizes to no more than 7 or 8. If you have a need for many more developers, break the teams down. Each team might then be responsible for a given functional area. If you have multiple teams in multiple locations, consider holding a Scrum of Scrums. And where these are numerous in complex environments, use Agile project management.
  • Ensure that the team, business, stakeholders, and even customers have access to each other. Ensure they communicate and collaborate, and remove anything that gets in the way of progress. Daily communication is the best cure for project ailments. When people speak, they get stuff done.

There are many ways a team can be put together to deliver software.

Project Brief

In Feasibility, you figured out the “why” of your project and either built your confidence to forge ahead with your startup or got backing to proceed. The project brief is the living document that brings together the “why” with the “what”, “when”, and “who”. It’s living because, as you progress from here forward, your knowledge, understanding, and path may change. To write this document once and never return to it is to consign your thoughts to a point in time. In an Agile world, your point-in-time reference may change weekly or even daily early on, so it’s important to keep this fresh.

  • A great tool for encapsulating and maintaining your project brief is something that Jonathan Rasmusson calls the “Inception Deck” in his book “The Agile Samurai”. Here you’ll find great advice on ensuring that everybody that is interested in, affected by, or involved with your project is on the same page.
  • The greatest enemy to delivering projects is in having an unclear, inconsistent, or just plain different understanding of what the project is and what “requirements” are to be satisfied. If even one important stakeholder has a different understanding or view of what you’re doing, the consequences can be substantial.
  • A good project brief communicates:
    1. A common and agreed expectation between stakeholders and team members.
    2. An understanding of the project, with the same understanding across all parties.
    3. The goal, vision, objective, scope, and project context.
  • You’ll have a lot of good information for the brief gathered from Feasibility. The project brief will help you define and find the answers to searching questions. It will bring together stakeholders, your raison d’etre, high level scope, risks, target solution, budget, timeline, expectations, and your priorities.

“A colleague stopped me in a corridor once and asked me where he could get the project brief for the project. I quipped ‘We don’t need a brief, we’re Agile’. He looked confused, as if he was questioning my sanity or authority. He was right to do so.”

Before you proceed, ensure you’ve got everybody on the same page, workshop it, ask the difficult questions, and nail it somewhere where people can stop, read it, comment on it, and help revise it.

Culture and Ways of Working

You know the way your business works and its culture, the way it likes to get stuff done. Agile, by its very nature, may challenge some of these ways of working that your business has cultivated over the years. Don’t expect Agile to be implemented and for everybody to lovingly adopt it from the outset. Some people may find it confusing and view it only with dread and fear. Some people may openly refuse to engage in it. These are challenges and perceptions you must overcome. But in your early days, don’t go around waving the Agile stick beating anyone that won’t listen with it. That won’t build trust, adoption, or engagement.

“I was a fan of waving big proverbial sticks once, and it earned me a lot of negative press. I turned it around, but not before suffering considerable pain first.”

As you set out on your path of adoption, tread carefully, respectfully, and with empathy. If you’re in a creaking old traditional business, perhaps it won’t be the best approach to get the whole business aligned. Start small and incrementally earn respect and recognition. Start with your team only. Once you start delivering quicker software with better quality than ever before, people will start to notice and will want to come play your game. When they do, offer them the ball, take them out for a coffee, and ease them into your new world. Help them.

Now that your team knows what the project is about and your plans for Agile adoption are agreed, let the team decide how they wish to behave and operate as a team.

  • Guide your team to identify the Agile concepts, techniques, behaviors, and frameworks that you feel fits your collective needs.
  • Take requests from your team members as to what they need to help them perform as best they can. Some of these requests you’ll be able to resolve immediately. For others, you may need to get budget or outside help. Do what you can to make it happen.
  • These are your first steps to becoming a true servant-leader.
  • Consider organizing some appropriate training in the concepts and techniques your team are choosing to adopt. This is the best way to ensure all of your team, even stakeholders, are on the same page and get the same message. Work with a supplier organization that can tailor their offering to your needs.
  • Be prudent. Nobody will be an Agile ninja after a few days in a workshop learning how to become Agile. Your path will be long. The word “become” is quite defining. Only when you truly embrace Agile will you see its value. It should excite you. If it excites you, then go excite others too.
  • Now that your team has agreed on the concepts and techniques, had their wishes fulfilled, and been through Agile training, turn the team’s attention to themselves and what they expect from you, the business, and each other.
  • Defining some Ways of Working (WoW) within and by the team helps build trust, relationships, and expectations. The WoW is no War and Peace. It should be short and to the point, between 7 and 10 bullet-pointed sentences. These sentences state clearly how people behave, communicate, collaborate, support, deliver, and perform together. It should also state what the team expects from the business.

  • Agile is as much a mindset as it is guiding principles and concepts. It helps you develop in the way you behave, think, negotiate, interact, communicate, perform, and improve. It relies on motivated individuals supporting each other to reach a common aim, together as one. There is an African proverb:
    “If you want to go quickly, go alone. If you want to go far, go together.”

By now your team should be super excited, energized and motivated. Now engage them further with your backlog of User Stories.

Backlog

Have no doubt in your mind that there’s uncertainty involved in your project. You can’t possibly know exactly what it will take to build the right product for your customers so early in its life. You cannot gaze wistfully into a crystal ball and predict the future.

The “backlog” or “product backlog” is where your requirements live. Agile favors the writing of short, pithy statements that capture the essence of a “requirement”. The backlog is simply a long list of entries, each entry defining a single, discrete “requirement” as a User Story. And from now on, we’ll be using the term User Story, and not “requirement”. You’re probably asking “why?”. That’s a good question. Since time immemorial, the features or facets a customer needs in a software project have been referred to as “requirements” (ah, I used it again! Last time. Promise.). That word has an interpretation that has no value in Agile. The Oxford dictionary defines it as:

“A thing that is needed or wanted.” Or, “A thing that is compulsory; a necessary condition”.

And unfortunately, if we set out defining what our solution should be, stating that things are “compulsory”, we will end up in trouble. It’s too easy to say that all these User Stories are compulsory. If we take that view, we run the risk of running over schedule and over budget in the attempt to deliver all of a given scope. It’s fine to say that, for this product, these stories are needed for the solution to be viable; we just want to avoid the interpretation of that particular word.

  • Always write stories from the perspective of a persona. A persona represents a user or stakeholder of the solution. It’s a good idea to develop these personas before you start a backlog.
  • At this stage, only write short simple statements that basically suggest a reminder to have a deeper conversation about the User Story when the time is right.
  • Real people think in terms of tasks that they need to get done to achieve a goal. Write your stories from the persona perspective and in terms of what they need to get done.
  • You don’t need to write the full backlog, just write as much as you can imagine your customers will need for your product to be viable.
  • You’ll discover later on through the life of the product that User Stories will change, become more or less important, or can be deleted completely. Releasing often, getting feedback, and assessing what’s a priority will inform this behavior.
  • Don’t write stories in isolation. Engage your team, stakeholders, even your customers. Stories can be updated at any time in the life of a project, but should avoid being changed once development work has started on them.
  • Some of your stories may be considered as “Epics”. These are large stories that cover a lot, and will be broken down closer to the time of delivery into smaller stories.
  • Consider using the INVEST model, a checklist for validating the quality of a good User Story.
  • Anybody can add a story to the backlog. It should be placed at the bottom, or in a specially created “Parking Lot”. This added story serves as a prompt to discuss it with the team and the business. If the business and team support it, it can then be estimated and prioritized.
  • You may also consider those parts of the system that are most risky. If you have a User Story or feature that is complex, new, or technically unknown, prioritize these to the top of your backlog. This way you won’t be attempting to deliver the challenging and critical parts of your product just weeks before your first release.

Once you have a backlog that fulfills your needs, you can estimate the stories in it, rank them in order of priority, and build a release plan.

High-Level Estimation and Prioritization

High-level estimation is the process of sizing your backlog. How “big” is the project, and what value does it deliver? Prioritization is the process of deciding which stories are most important to you, the viability of the product, and the interests of your customers. We want to deliver the highest-value items earliest to bring the most value to the business, get feedback from the customer, and not sweat the small stuff. The output will be an ordered backlog that is ranked in priority and sized.

  • Stories at the top are considered most valuable. We want to deliver the most valuable items as early as possible.
  • There are many techniques for sizing and estimating, but at this point you just want to get a good indicative feel of the size of a story.
    • Use t-shirt sizes, relative sizing, ideal days, or story points.
  • You won’t have all the available information at this point either, and that’s okay. Just run with it.
  • Engage your business stakeholders or product manager if you have one, and the team that will be doing the work.
  • We want those that will be designing, developing, and testing the work to size it, because the best people to estimate are the experts.
  • The team may start to break stories down into smaller parts. If this happens, write stories that are more granular but discrete.
  • The team may also start to rank some stories, as naturally some things have to get delivered before others to support the technology or a given user journey.
  • Between you and the team, you may also start to find holes in the backlog that need to be filled. Just fill those holes with new stories and estimate and prioritize as appropriate.
  • Prioritization is most easily performed using a MoSCoW analysis. MoSCoW is a simple technique that helps you decide which stories are “must haves” for your product to be successful.
  • You may do a prioritization pass before estimation begins. However, the sizing of certain elements may also determine a decision on priority and real business value. So play estimation and prioritization off of each other, but don’t squabble over it!
  • No two stories can be equally important: the story at rank 1 is more important or valuable than the story at rank 2.
  • A great way to demonstrate the importance or value of a story is to add a monetary value to it. If, for example, story A is thought to bring in $5000 of extra revenue and story B might attract $100, then story A is more valuable. Equally, if story C saves the business more than story D, story C is more valuable.
  • Once you’ve sized your backlog, you’ll be left with a number. When we come to release planning, that number will help us understand how much can be delivered by our team within a given timeframe.

Remember that you don’t need to know all your user stories up front. Also, remember that it’s not necessary to deliver all of your stories before a customer sees your product. You want to remain Agile – and that means only creating what you need to when you need to, wasting as little as possible, and responding to changes in customer needs and market conditions. A roadmap will help you lay out your product and plan your objectives for the next 3, 6, 9, and 12 months.

Roadmap and Storymaps

A roadmap is exactly what it sounds like; it offers the same thing as the roadmap of a country. It details the relative position of cities (or in your case, features) to each other and the routes that can be taken to get from city A to city B, or from feature X to feature Z. It doesn’t tell you which route you should take, or how you should get there. It doesn’t tell you which mode of transport to use, but it might illustrate options to take the highway or the train.

In a city, there are many roads, buildings, parks, services, and facilities. All features of a city. This is also true of the roadmap for your product. At this level, your roadmap shows major goals or milestones to be achieved. A goal is a logical grouping of themes, features, and User Stories rolled up in a consumable view that demonstrates tangible value. The roadmap of a software product shares this view and communicates your intent. It doesn’t necessarily show you how or when features will be delivered, only the relative value of the goals and features to you and your business.

One great way to demonstrate a roadmap is to generate a story map. This tool indicates customer-valued prioritization. It lays out the backbone, or essential building blocks, of your product. The walking skeleton hangs off the backbone and illustrates the features that make it an MVP. All the other features are what add further value and importance to the system. The story map lays out features in relative position to each other and is an awesome visual tool.

It’s worth noting that after carrying out a story mapping exercise, your backlog may need to be refined. This will be apparent where stories have been split into multiple stories, identified as redundant, newly created, or as a higher or lower priority than previously thought. The story map is another artifact that is continually revisited and revised.

The Initiation phase is typically performed once in the life of your project. However, many of the tools and documents you created will be revisited and revised throughout the project.

Release Planning

“At last”, I hear you cry, “finally some planning”. Well, you have essentially been planning all through the Feasibility and Initiation stages; we just didn’t call it that. This is evidence of iterative or adaptive planning – the art of only planning enough to achieve your immediate and most valuable goals. We’ll see more about adaptive planning later, but for now release planning is our focus.

Your release plan may well be determined by external events. Perhaps there is a trade show you want to demonstrate your app at, or your customers will get most benefit using your app on the run up to Christmas. These are timeline events that your goals may be aligned with. You would most likely plan to deliver user stories or features that make the most sense to facilitate these events. If there are no external dates that you need to consider, then you can just go with feature prioritization and delivering earliest what makes most sense and delivers the most value to your customers.

  • If you created a story map in Initiation, this will help guide your release plan. Use it to identify your MVP, the minimum feature set that will get your product in the hands of your customers, start earning revenue, get feedback, and acquire more customers.
  • The story map will help you carve out future releases too. But keep in mind that as you learn, get feedback, inspect and adapt, future releases may change. So don’t plan in great detail.
  • You may have from 2 – 4 releases in a 12-month period. Don’t do less, because your first release is your MVP and gets your foot in the door, after which you’ll want to iterate, release more features, and fix bugs in a regular cycle. Don’t do more unless you’re performing well and have plenty of Agile techniques and tools in place to manage continuous delivery.
  • Each release is a timebox which is broken down into smaller iterations. An iteration is a timebox. The timebox is one of the most important control measures in Agile.
  • To size your release:
    • Divide your release timebox by 2. This will give you how many iterations you have. So if you have a release of 12 weeks, you get 6 two week iterations.
    • Then remove two iterations – you’ll reserve one for a “Sprint 0” iteration and another for a “Release Iteration”. This leaves you with 4 development iterations.
    • Work with your team and product owner to fill each iteration with stories, starting from the top of the backlog and working down. When the team thinks they’ve filled an iteration with enough stories they can realistically achieve based on their capacity in the 2 week time frame, repeat for the next iteration(s). Use the release plan and story map to guide what goes into each iteration.
    • Do not plan the next release yet. You’ll do that as you near the end of the current release.
    • Take the user stories from each of your iterations and add up the story sizes. So if your iteration 1 has 25 story points, but iterations 2, 3, and 4 have 10, 45, and 65 points respectively, you will need to refactor. Target an equal number of story points in each iteration. This becomes your anticipated velocity – the amount of valuable “stuff” completed for each iteration.
  • The team may not have worked together before. They are almost certainly working on a new problem or product. They will not perform at their best from day one. For this reason, your velocity may be volatile in the first few sprints. But as the team settles down, it should stabilize. Use this data to refactor your release planning which in turn helps you plan your product with a known velocity and confidence.
  • If necessary, break stories into smaller chunks and resize.
  • Your release plan, especially early in the life of a project and a new team, is only ever a guide. Do not treat it as a commitment or guarantee that all or these exact stories will get delivered as planned. As your team matures, work gets done and trust and confidence builds, so will the accuracy of your plans.
  • Never force your team to “commit” to more than they are comfortable with. Trust their judgement and their expertise.
  • Future release planning exercises will be simpler, because you’ll take the release size, multiply the number of iterations by your team’s velocity, and fill the release plan with the stories that add up to velocity x number of iterations.

Consider this as an example. If you go to a restaurant to eat, you wouldn’t go ahead and order all the items on the menu and expect to eat it all in one sitting. You’d never be able to eat it all, you might not be able to afford the cost, you’d be sick of food, and the restaurant might close whilst you’re eating the 5th of 19 courses! You wouldn’t leave a happy customer, the restaurant might not find you to be a great customer, and the experience would be bad all round. More likely, if you love the restaurant, it’ll be because you enjoyed a lovely meal there once. You’ll decide to go back and enjoy a different meal. You’ll tell your friends, you’ll go there often. The moral of the story is:

  • Plan your releases in small chunks.
  • Consider your capacity.
  • Don’t take on more than you can realistically achieve.
  • Revisit the plan often to see what you can change and improve.
  • Plan, Execute, Inspect, Learn, Adapt – Repeat.

Release planning takes place often in a software project. Each new release requires a release plan. The release plan can also be refactored at any time during a project. Just take care to not overdo it and fall into zombified planning psychosis. At the end of release planning, you’ll want to prepare for the first iteration, which is where we’re happily going next!

Product Iterations

Your team is in place, they’re motivated, you have an engaged business, your initial planning is done – you’re now ready to build your dreams.

We’ve talked earlier about some of the tools, techniques, and concepts that Agile subscribes to. There are many resources already available that do a great job at laying the foundations for delivering an Agile software project. Pick one, keep it vanilla, and grow into your Agile journey. To help shorten the trauma in deciding the right Agile software development methodology to start with, I’d recommend Scrum. And only Scrum. The temptation will be there to use elements of other methodologies. Don’t do it yet. Save that kind of change until you have lived and breathed Scrum for 6 or 12 months. Then, if either you’ve determined it alone does not work for you or you want to mature as a team, steadily introduce new methods, techniques, or frameworks.

I choose Scrum as the recommended approach for new teams adopting Agile because it has all the basics built in. It’s very popular and has many good-quality communities and resources online, in books, or in the training room. It will serve you well, even for the smallest of teams. The rest of this post is dedicated to discussing some important aspects of software delivery that you, your team, and stakeholders should always keep in mind.

Adaptive Planning

Planning in an Agile project is an ongoing process. We do some initial planning up front, just enough to understand what we know at a given point. Our initial plans will be loosely defined and flawed. And then we iterate our planning, adapting to new information, planning in greater detail just before we enter into delivery, responding to changing scope. This is one way of minimizing waste – only putting effort into planning when we need to.

  • The team and the business, or its informed and authorized representative such as a product manager, actively plan together. The team because they are the experts that will deliver, the product manager because he is the expert who can guide the needs of the business.
  • Estimates for user stories will be less accurate the further out they are from being developed. For example, epics will attract high level estimates that will be based on a lot of unknowns. Well defined granular user stories that are estimated just before a sprint starts will be much more accurate.
  • There are many estimation “scales” you can use. Use the technique that feels most right for your team and the right stage of planning – wide band delphi, planning poker, ideal time, relative sizing, story points, or t-shirt sizes.

  • When sizing a story, one size really does fit all. All stories should be sized using the same technique and encompass all elements, such as design, development, testing, and refactoring. When you come to do iteration planning for a sprint, individual tasks can be created that all contribute to the completion of the story.
  • Always factor risk, unknowns, outside influences, your team’s capacity, and ever improving velocity in planning.
  • If user stories that were taken into a sprint were not completed, we do not extend the timebox. These unfinished or unstarted items are put back to the top of the backlog and taken into the next sprint.
  • Always plan to deliver the least amount required to achieve a given goal. Identify techniques to prune features. Reduce waste, find the real value that can be realistically delivered with your time constrained resources.

Story Creation

Stories get elaborated upon when you need them. You don’t need full story explanations for features that are 6 months away from being delivered. Writing them at the beginning may be wasted effort if that need later disappears from scope. Write your stories at most 2 iterations before they are needed. Reducing that timeframe to 1 sprint is preferable.

Let’s use your time in Sprint 0 to set the scene. In two weeks you’ll start development. It’s now time to take enough of the stories from the top of the backlog that could potentially get delivered in sprint 1. You might take 10 or 15 percent more if you’re unsure of your velocity. These one-liners can now be expanded into truly valuable documents with scenarios, acceptance criteria, and wireframes. If the wireframes haven’t been created yet, now’s the time to do so. You might find as you review these candidate stories that they need to be broken down. Perhaps they were Epics that couldn’t possibly be completed in a sprint. If you break stories down, re-estimate them with the team.

A good story follows these rules:

  • Written in a common format, e.g. AS A <persona> I WANT <goal> SO THAT <benefit>.
  • Includes acceptance criteria that the story must meet to be considered acceptable to the business for sign-off.
  • Uses language that the business and your customers understand.
  • Follows the INVEST model.
  • Includes all supporting documents to inform the developer, designer, and tester: wireframes, technical design overview, other assets.
  • Meets your standard Definition of Done criteria.

Sprinting

Whether you call it a sprint, an iteration, or a timebox, each incremental delivery of your Agile project is time constrained. The timebox doesn’t shorten and it doesn’t lengthen. Focus on 2-week iterations at the beginning. You might find that 1, 3, or 4 weeks suits your business model better. Once you choose a length, don’t change it. You want to maintain a regular cadence, or a sustainable pace. This means the team and business focus on delivering regular software, releasing potentially shippable increments every two weeks without mad-dash overtime being employed to get the job done.

  • Working in small increments has a huge benefit. It means you’re only focusing on the immediate future of delivery; and with new input, feedback and learning, you can respond to change within a short iterative cycle.
  • At the beginning of a release, perform a sprint 0. This lets the team, the business, and your project release get geared up and sets the mindset for successful delivery. Draw out the base technical framework and architecture that will support the foundation for the release. Setup environments, accounts, and tools. Perform spikes to understand difficult or unknown problems. Elaborate your user stories in readiness for sprint 1. Sprint 0 is about getting prepared.
  • During a release, keep refining the backlog. As you understand more or learn something new, your priorities may change, new requirements may unfold, and what you previously thought was a great feature may get removed entirely.
  • Don’t start work that has no chance of completing within a sprint. If it can’t be completed within a sprint, break it down into smaller chunks. And don’t start new work when previously started work has not been completed. You create no value by starting lots of things that can’t be considered complete. Further, avoid context-switching. This is the activity of starting one task, getting disturbed, starting another, and at its most problematic not completing either.
  • Limit the amount of work that is in progress by the team at any one time. For example, if you have 3 developers and 1 tester, you may put a WIP limit on the developers and on the tester.
  • Never add more work to a sprint after it has been planned. Never stop a sprint part way through. The exceptions to this are:
    • If you performed faster than expected, consider taking the next story from the top of the backlog, as long as it is suitably prepared.
    • If the sprint is performing so badly that it will not complete. This usually only happens where there has been a catastrophe of some description.
    • If the sprint objective can no longer be supported due to immediate changing needs of the business.
  • If you do cancel a sprint, do it gracefully, take time to refocus, and start a new sprint as you would any other.
  • Toward the end of a release, consider a final release sprint. No new features are written, some bugs can be cleaned up, and preparations can be made to actually release a new version of your product to your customers. That’s not to say you can’t make incremental customer releases in the meantime – it’s just that this is a controlled, measured, and sustainable release mechanism.
  • At the end of a release you might consider a one week sprint. In this sprint, you might work with the team to break down some new ideas or figure out some new technology. These are great tools that benefit the business, and they give the team some breathing space to think and be creative. It’s not for goofing off; it will still create value. Equally, everyone needs a break. Taking a vacation at this time helps keep your cadence and velocity in good shape when you’re mid-release.

Defining Done

Defining what “done” really means is very important. There are many versions of “done” within the life of your project – what it means to be “done” with a story, release, or whole project. It all boils down to what you, your team, and business will consider as complete to the right level of quality to satisfy readiness to ship.

For your team, the definition of a “done” story will be something like all code complete, peer reviewed, meets the defined acceptance criteria, unit tested, UAT’ed, and pushed to your code repository. To enable the passing of a story from designer to developer to tester, each handoff will have its own definition of “done” that must be accepted by the next person in the chain. Your product owner will have expectations as to what this means to her in order to release the product increment to your customers. In any case, everybody must be aware of what “done” means and be a responsible party in ensuring its meaning is met. Define your definition of “done”, communicate it, agree upon it, and evolve it. Done done.

Continuous Measurement

If you can’t measure it, you can’t manage it. The same goes for improvements. The need to gather empirical data in an Agile project is almost as vital as having blood course through your veins! How do you know what needs managing, correcting or improving if there’s no data? Well, simply put, you’ll be relying on gut feel and unsubstantiated guesswork, which falls apart pretty quickly under scrutiny. And depending on who’s doing the scrutinizing, this can be a rather uncomfortable place to be. So from the outset of your project, ensure you know how you are going to demonstrate progress and by what measures others are going to view your success.

Fortunately, Agile comes loaded with useful tools and techniques. The first thing to do is go back to the Agile Manifesto, type the following words into your favourite word processor, blow them up to 96pt, print, and apply to the wall for all to see:

Working software is the primary measure of progress.

Your greatest demonstrable power in delivering software is to show it to people working, doing what it’s supposed to do. Not only will this make your customers happy, it will earn your team great respect and grease the wheels for greater adoption through the business.

Here are some other tools:

  • The daily standup: There are a few variations of this ceremony, but the essence is to have all team members talk to each other face to face, keep it short, keep it focused, and keep it light. If anything needs discussion at great length, park it for a longer conversation between those needed after the standup. If impediments are raised, write them up like a story, add them to your backlog, and get them addressed asap. Anything that is impeding your team slows their progress and will be demonstrable in reduced velocity and software that does not meet expectations.
  • Velocity: This is a historical tool. It’s a little like those financial warnings that say past performance is no guarantee of future performance. But in Agile’s case, we do hope to achieve a team velocity that is largely smooth. It’s velocity that allows us to project future performance and have confidence in our plans. There may be influences outside your control which might lower the number of story points output for a given sprint. If this happens, you’ll probably be able to recover. Never use velocity as a stick to beat your team with; it will win you no favors. One thing is for sure: velocity will be erratic for the first 2 – 4 sprints. Somewhere in that timeframe, though, you should start to see consistency and stability. If your velocity is wavering from one extreme to another, you’ve got a problem which you’ll need to fix with your team.

  • The burndown chart: Now this measure of progress is a thorny one. For that reason, I haven’t given a link to go find out more; you’ll have to do your own research and agree as a team and business which works for you. The reason it’s thorny? Well, not one resource out there tells the same story! One thing agreed is that it will show, within a sprint or a release, how you’re performing against your timebox. If maintained daily, it will show if you’re on track or deviating. Some teams use it to represent how much value is left to be created; more often than not, others use it to show how much work is left to be completed. The former is a celebration of success and value generation, the latter is less inspiring and motivating.
  • The burnup chart: If you must show work completion rates, use the burndown chart for that. But using the burnup chart allows you to show how much value has been created and how much more value you’re planning to create by the end of the sprint. It is a much more motivating tool for a team to both demonstrate to the business what has been done (or how little, if things aren’t going so well…), and what they still have their sights set on. In any case, use the charts to spot where progress is not tracking as expected, look for patterns or deviations, and get on top of them to fix the underlying problem as soon as possible. Update them daily for sprints and once at the end of a sprint for release version charts.
  • Task boards: These are great visual radiators for demonstrating value being created. When updated daily, or at any point in the day, they immediately show progress being made. If combined with Kanban, they’re also great tools for visualising flow and blockages in the system. If you can see that loads of development is completed, but testing is not as productive, you can see the problem happening and respond appropriately and swiftly.

Other tools to consider are Agile Earned Value, cycle time, and Cumulative Flow Diagrams (CFDs).

Keep these measures, charts, and other tools visible, preferably loud and proud on a wall for all to see. The team, stakeholders, and other interested parties can immediately see the status of a project. It’s transparent and serves as a valuable communication tool. If you can’t put these artifacts on a wall, use tools that are sharable and collaborative and make sure those that want access have it.

Continuous Improvement

Throughout your Agile life, seek to identify and learn where improvements can be made. Lessons are not captured and learned only at the end of a project. It’s like passing your driving test and tentatively taking your first drive without an instructor. You’ll know what works and what you’re supposed to do, but over time you’ll tailor your driving skill and capabilities, learning new techniques. You’ll even pick up bad habits. Look for them, understand them, and find ways to improve.

There are many opportunities for identifying what does not work and applying remedies. The built-in approach to this in Agile is the retrospective. This is the primary tool for reflection and adjustment. At the end of every sprint, take time with the team to improve how work gets done, how quality is delivered, how efficiency can be maximized, how waste can be minimized and how capacity is increased. When you identify measures for improvement, don’t be tempted to fix all your problems right away. Identify the ones that will have most impact and can be implemented in the next sprint. Measure and observe. If it had the desired impact, lock it up, write it up into your ways of working and definitions of done. If it doesn’t work, think again. Any lessons learned that don’t get put into the upcoming sprint can be parked and prioritized for attention in the next sprint.

Tailor the process. Remove anything that does not work. Remove impediments. Your maturity as an Agile team will know no bounds if you let it.

Beyond Agile Project Management

It’s important to know what happens after the project is delivered. Support & maintenance are key to ensuring that once the project is in customers’ hands it remains performant, customer feedback can be factored into future releases, and customer issues are dealt with appropriately. A project is a unique, time constrained endeavour. The product it delivers has a life after the project team has been disbanded. Ensure you are capable of supporting the product once it is live.

Agile projects can co-exist with more traditional approaches, balancing the requirements for budgetary control and stakeholder visibility with the Agile aims of flexibility and responsiveness.

A governance framework or Agile governance model is used in conjunction with a standard Agile process, such as Scrum. They work in two specific ways:

  • They provide a wrapper for an Agile project by clarifying the processes that occur outside of development iterations (sprints). This includes providing clear criteria for the successful completion of project initiation and proper validation of the decisions and plan.
  • They shift the emphasis of specific parts of the standard Agile process and highlight particular principles and techniques that need governance or support governance.

In today’s ever-changing world, organizations and businesses are keen to adopt a more flexible approach to delivering projects, and want to become more Agile. However, for organizations delivering projects and programmes, and where existing formal project management processes already exist, the informality of many of the agile approaches is daunting and is sometimes perceived as too risky. These project-focused organizations need a mature agile approach – agility within the concept of project delivery – Agile Project Management.

Learn and grow with your adoption of Agile. Only ever do what your team is comfortable with, ensure their voice is heard, and act on their requests. Encourage your team to adopt new and more valuable techniques when the time is right. Negotiate with the business and encourage them to understand what it means to be an Agile organization.

Enjoy the journey.

This article was written by PAUL BARNES, Toptal’s Head of Projects.

MySQL Master-Slave Replication on the Same Machine


MySQL replication is a process that enables data from one MySQL database server (the master) to be copied automatically to one or more MySQL database servers (the slaves). It is usually used to spread read access on multiple servers for scalability, although it can also be used for other purposes such as for failover, or analyzing data on the slave in order not to overload the master.

As the master-slave replication is a one-way replication (from master to slave), only the master database is used for the write operations, while read operations may be spread on multiple slave databases. What this means is that if master-slave replication is used as the scale-out solution, you need to have at least two data sources defined, one for write operations and the second for read operations.

MySQL developers usually work on only one machine and tend to have their whole development environment on that one machine, so that they are not dependent on a network or internet connection. If master-slave replication is needed because, for example, they need to test replication in a development environment before deploying changes elsewhere, they have to create it on the same machine. While the setup of a single MySQL instance is fairly simple, we need to make some extra effort to set up a second one, and then to configure master-slave replication between them.

For this step-by-step tutorial, I’ve chosen Ubuntu Linux as the host operating system, and the provided commands are for that operating system. If you want to set up your MySQL master-slave replication on some other operating system, you will need to adapt the commands accordingly. However, the general principles of setting up MySQL master-slave replication on the same machine are the same for all operating systems.

Installation of the first MySQL instance

If you already have one instance of MySQL database installed on your machine, you can skip this step.

The easiest way to install MySQL on Ubuntu is to run the following command from a terminal prompt:

sudo apt-get install mysql-server

During the installation process, you will be prompted to set a password for the MySQL root user.

Setting up mysqld_multi

In order to manage two MySQL instances on the same machine efficiently, we need to use mysqld_multi.

The first step in setting up mysqld_multi is the creation of two separate [mysqld] groups in the existing my.cnf file. The default location of the my.cnf file on Ubuntu is /etc/mysql/. So, open the my.cnf file with your favorite text editor, and rename the existing [mysqld] group to [mysqld1]. This renamed group will be used for the configuration of the first MySQL instance, which will also be configured as the master instance. Since in MySQL master-slave replication each instance must have its own unique server-id, add the following line to the [mysqld1] group:

server-id = 1

Since we need a separate [mysqld] group for the second MySQL instance, copy the [mysqld1] group with all current configurations, and paste it below in the same my.cnf file. Now, rename the copied group to [mysqld2], and make the following changes in the configuration for the slave:

server-id           = 2
port                = 3307
socket              = /var/run/mysqld/mysqld_slave.sock
pid-file            = /var/run/mysqld/mysqld_slave.pid
datadir             = /var/lib/mysql_slave
log_error           = /var/log/mysql_slave/error_slave.log
relay-log           = /var/log/mysql_slave/relay-bin
relay-log-index     = /var/log/mysql_slave/relay-bin.index
master-info-file    = /var/log/mysql_slave/master.info
relay-log-info-file = /var/log/mysql_slave/relay-log.info
read_only           = 1

To setup the second MySQL instance as a slave, set server-id to 2, as it must be different to the master’s server-id.

Since both instances will run on the same machine, set the port for the second instance to 3307, since it has to be different from the port used for the first instance, which is 3306 by default.

In order to enable this second instance to use the same MySQL binaries, we need to set different values for socket, pid-file, datadir and log_error.

We also need to enable relay-log in order to use the second instance as a slave (parameters relay-log, relay-log-index and relay-log-info-file), as well as to set master-info-file.

Finally, in order to make the slave instance read-only, the parameter read_only is set to 1. You should be careful with this option since it doesn’t completely prevent changes on the slave: even when read_only is set to 1, updates will still be permitted from users who have the SUPER privilege. MySQL has recently introduced the parameter super_read_only to prevent SUPER users from making changes. This option is available as of version 5.7.8.
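
For example, once the slave instance is up and running (later in this tutorial), you could also enable this protection at runtime. This is a minimal sketch, assuming MySQL 5.7.8 or newer and the slave connection parameters configured above:

# block writes even from SUPER users on the slave (requires MySQL 5.7.8+)
mysql --host=127.0.0.1 --port=3307 -uroot -p -e "SET GLOBAL super_read_only = 1;"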

Apart from the [mysqld1] and [mysqld2] groups, we also need to add a new group [mysqld_multi] to the my.cnf file:

[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin
user       = multi_admin
password   = multipass

Once we install the second MySQL instance and start up both, we will give the appropriate privileges to the multi_admin user so that it is able to shut down the MySQL instances.

Create new folders for the second MySQL instance

In the previous step we prepared the configuration file for the second MySQL instance. In that configuration file two new folders are used. The following Linux commands should be used in order to create those folders with appropriate privileges:

mkdir -p /var/lib/mysql_slave
chmod --reference /var/lib/mysql /var/lib/mysql_slave
chown --reference /var/lib/mysql /var/lib/mysql_slave
 
mkdir -p /var/log/mysql_slave
chmod --reference /var/log/mysql /var/log/mysql_slave
chown --reference /var/log/mysql /var/log/mysql_slave

Additional security settings in AppArmor

In some Linux environments, AppArmor security settings are needed in order to run the second MySQL instance. At least, they are required on Ubuntu.

To properly set up AppArmor, edit the /etc/apparmor.d/usr.sbin.mysqld file with your favorite text editor and add the following lines:

/var/lib/mysql_slave/ r,
/var/lib/mysql_slave/** rwk,
/var/log/mysql_slave/ r,
/var/log/mysql_slave/* rw,
/var/run/mysqld/mysqld_slave.pid rw,
/var/run/mysqld/mysqld_slave.sock w,
/run/mysqld/mysqld_slave.pid rw,
/run/mysqld/mysqld_slave.sock w,

After you save the file, reboot the machine in order for these changes to take effect.
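
If you would rather avoid a full reboot, reloading just the modified MySQL profile is usually enough; the sketch below assumes the standard apparmor_parser tool that ships with AppArmor on Ubuntu:

# reload the modified AppArmor profile for mysqld without rebooting
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld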

Installation of the second MySQL instance

Several different approaches may be followed for the installation of the second MySQL instance. The approach presented in this tutorial uses the same MySQL binaries as the first, with separate data files necessary for the second installation.

Since we have already prepared the configuration file and the necessary folders and security changes in the previous steps, the final installation step of the second MySQL instance is the initialization of the MySQL data directory.

Execute the following command in order to initialize the new MySQL data directory:

mysql_install_db --user=mysql --datadir=/var/lib/mysql_slave

Once the MySQL data directory is initialized, you can start both MySQL instances using the mysqld_multi service:

mysqld_multi start

Set the root password for the second MySQL instance by using mysqladmin with the appropriate host and port. Keep in mind that if the host and port are not specified, mysqladmin will connect to the first MySQL instance by default:

mysqladmin --host=127.0.0.1 --port=3307 -u root password rootpwd

In the example above I set the password to “rootpwd”, but using a more secure password is recommended.

Additional configuration of mysqld_multi

At the end of the “Setting up mysqld_multi” section, I wrote that we will give appropriate privileges to the multi_admin user later on, so now is the time for that. We need to give this user appropriate privileges in both instances, so let’s first connect to the first instance:

mysql --host=127.0.0.1 --port=3306 -uroot -p

Once logged in, execute the following two commands:

mysql> GRANT SHUTDOWN ON *.* TO 'multi_admin'@'localhost' IDENTIFIED BY 'multipass';
mysql> FLUSH PRIVILEGES;

Exit from the MySQL client, and connect to the second instance:

mysql --host=127.0.0.1 --port=3307 -uroot -p

Once logged in, execute the same two commands as above:

mysql> GRANT SHUTDOWN ON *.* TO 'multi_admin'@'localhost' IDENTIFIED BY 'multipass';
mysql> FLUSH PRIVILEGES;

Exit from the MySQL client.

Start both MySQL instances automatically on boot

The final step of setting up mysqld_multi is the installation of the automatic boot script in the init.d.

To do that, create a new file named mysqld_multi in /etc/init.d, and give it the appropriate privileges:

cd /etc/init.d
touch mysqld_multi
chmod +x /etc/init.d/mysqld_multi

Open this new file with your favorite text editor, and copy the following script:

#!/bin/sh

### BEGIN INIT INFO
# Provides:          mysqld_multi
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start daemon at boot time
# Description:       Enable service provided by daemon.
### END INIT INFO

bindir=/usr/bin

if test -x $bindir/mysqld_multi
then
    mysqld_multi="$bindir/mysqld_multi"
else
    echo "Can't execute $bindir/mysqld_multi"
    exit 1
fi

case "$1" in
    'start' )
        "$mysqld_multi" start $2
        ;;
    'stop' )
        "$mysqld_multi" stop $2
        ;;
    'report' )
        "$mysqld_multi" report $2
        ;;
    'restart' )
        "$mysqld_multi" stop $2
        "$mysqld_multi" start $2
        ;;
    *)
        echo "Usage: $0 {start|stop|report|restart}" >&2
        ;;
esac

Add the mysqld_multi service to the default runlevels with the following command:

update-rc.d mysqld_multi defaults

Reboot your machine, and check that both MySQL instances are running by using the following command:

mysqld_multi report

Setup master-slave replication

Now that we have two MySQL instances running on the same machine, we will set up the first instance as the master, and the second as the slave.

One part of the configuration was already performed in the section “Setting up mysqld_multi”. The only remaining change in the my.cnf file is to enable binary logging on the master. To do this, edit the my.cnf file with the following changes and additions in the [mysqld1] group:

log_bin                         = /var/log/mysql/mysql-bin.log
innodb_flush_log_at_trx_commit  = 1
sync_binlog                     = 1
binlog-format                   = ROW

Restart the master MySQL instance in order for these changes to take effect:

mysqld_multi stop 1
mysqld_multi start 1

In order for the slave to connect to the master with the correct replication privileges, a new user should be created on the master. Connect to the master instance using the MySQL client with the appropriate host and port:

mysql -uroot -p --host=127.0.0.1 --port=3306

Create a new user for replication:

mysql> CREATE USER 'replication'@'%' IDENTIFIED BY 'replication';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'%';

Exit from the MySQL client.

Execute the following command in order to create a dump of the master data:

mysqldump -uroot -p --host=127.0.0.1 --port=3306 --all-databases --master-data=2 > replicationdump.sql

Here we use the option --master-data=2 in order to have a comment containing a CHANGE MASTER statement inside the backup file. That comment indicates the replication coordinates at the time of the backup, and we will need those coordinates later for the update of the master information in the slave instance. Here is an example of that comment:

--
-- Position to start replication or point-in-time recovery from
--

-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=349;

Import the dump you created in the previous step into the slave instance:

mysql -uroot -p --host=127.0.0.1 --port=3307 < replicationdump.sql

Finally, in order for the slave instance to connect to the master instance, the master information on the slave needs to be updated with the appropriate connection parameters.

Connect to the slave instance using the MySQL client with the appropriate host and port:

mysql -uroot -p --host=127.0.0.1 --port=3307

Execute the following command in order to update the master information (take the replication coordinates from the dump file replicationdump.sql, as explained above):

mysql> CHANGE MASTER TO
	-> MASTER_HOST='127.0.0.1',
	-> MASTER_USER='replication',
	-> MASTER_PASSWORD='replication',
	-> MASTER_LOG_FILE='mysql-bin.000001',
	-> MASTER_LOG_POS=349;

Execute the following command in order to start the slave:

mysql> START SLAVE;

Execute the following command in order to verify the replication is up and running:

mysql> SHOW SLAVE STATUS \G
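
In the status output, the Slave_IO_Running and Slave_SQL_Running fields should both show Yes. As an additional sanity check, here is a minimal sketch of an end-to-end test (the repl_test schema is just a hypothetical example): insert a row on the master and read it back from the slave.

# on the master (port 3306): create a throwaway schema and insert a row
mysql --host=127.0.0.1 --port=3306 -uroot -p -e "CREATE DATABASE IF NOT EXISTS repl_test; CREATE TABLE repl_test.ping (id INT PRIMARY KEY); INSERT INTO repl_test.ping VALUES (1);"

# on the slave (port 3307): the same row should appear almost immediately
mysql --host=127.0.0.1 --port=3307 -uroot -p -e "SELECT * FROM repl_test.ping;"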

Congratulations. Your MySQL master-slave replication on the same machine is now successfully set up.

Wrap Up

Having master-slave replication configured in your development environment is useful if you need it for a scale-out solution in the production environment. This way, you will also have separate data sources configured for write and read operations, so you can test locally that everything works as expected before further deployment.

Additionally, you may want to have several slave instances configured on the same machine to test a load balancer that distributes the read operations across several slaves. In that case, you can use this same guide to set up the other slave instances by repeating all the same steps, as in the sketch below.
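
For example, a second slave might get its own [mysqld3] group in my.cnf along the lines of this sketch; the port and directory names here are illustrative assumptions, and the new instance would also need its own folders, AppArmor entries, data directory initialization, and CHANGE MASTER setup:

[mysqld3]
server-id           = 3
port                = 3308
socket              = /var/run/mysqld/mysqld_slave2.sock
pid-file            = /var/run/mysqld/mysqld_slave2.pid
datadir             = /var/lib/mysql_slave2
log_error           = /var/log/mysql_slave2/error_slave2.log
relay-log           = /var/log/mysql_slave2/relay-bin
relay-log-index     = /var/log/mysql_slave2/relay-bin.index
master-info-file    = /var/log/mysql_slave2/master.info
relay-log-info-file = /var/log/mysql_slave2/relay-log.info
read_only           = 1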

This article was written by IVAN BOJOVIC, a Toptal SQL developer.

Who Knew Adobe CC Could Wireframe?


Wireframing is a major step in designing any user interface, whether it’s a website, application, or software product. Without distraction in the form of visuals, colours, typography, styles and effects, you can be more focused on defining content hierarchy and user experience.

Doing low fidelity wireframes and prototypes will help you test and iterate more often and in earlier phases, work faster and develop products that your users will love.

There are a lot of different wireframing tools to choose from in the wild. Which one you choose will depend on your personal preferences and workflow style.

Just like a lot of designers moving to digital design from the print world, I’m an expert in the Adobe applications like Illustrator, InDesign, and Photoshop. I can use them efficiently, everywhere and at any time (even if someone wakes me up in the middle of the night and refuses to give me a cup of coffee). However, these have also become the tools I use to do web and application visual design. So, for my workflow to be the most efficient I use them for wireframing too.

With every project, I usually start designing by doing very rough sketches on paper, or lately more often, if not near my office desk, on my iPad or smartphone screen. These sketches are there only to focus my thoughts regarding the chosen concept and the client will probably never see any of them. When I feel confident enough that my idea works, I jump right away into wireframing. I usually use Adobe Illustrator and InDesign combined. Illustrator for creating most of the UI kit elements and InDesign for wireframing itself.

In this article, I’ll explain a step by step process of how to incorporate those tools into your wireframing workflow as well. But, before I go into details, let me show you what the strengths and weaknesses are of using InDesign as a primary wireframing tool.

The Pros and Cons of Using Adobe InDesign as a Wireframe and Prototyping Tool

For some time now, Adobe InDesign has been not only a desktop publishing application; it’s also widely used for digital publishing and EPUB creation. There are also some reasons why it is a suitable tool for wireframing:

  • Master pages – You can quickly and consistently apply global design elements like navigation, headers, footers and so on. You can create as many master pages as you need, and on top of it, one master can be based on another.
  • Solid grid support – Allows you to easily create and apply different kinds of grids, complementing columns with horizontal and vertical guides to create modules, and subgrids for greater complexity and precision.
  • Alternate layouts – Enables wireframes for multiple devices and orientations in the same file.
  • CC Libraries – Give easy access to and reuse of different assets like commonly used objects, colours, and character and paragraph styles. Create assets in InDesign, Illustrator, Photoshop or with the Adobe Capture mobile app, whichever you prefer.
  • Layers – Provide you the ability to organize, group, show/hide and lock/unlock objects and content selectively in the wireframe. Every page of a multi-page InDesign document has the same number and order of layers. All of the changes you make to layers are reflected on all pages, like visibility, stacking order or deletion.
  • Styles and tables – Give complete control over the look of your text, objects and tables through the use of InDesign styles. Styles can be based on each other, providing the ability to easily cascade desired formatting throughout the document. Creating and formatting tables that you can use as wireframe content elements, or even as helpers for layout purposes, is very simple.
  • Typekit integration – In high fidelity mock-ups, you can use any of the Typekit fonts that sync to the desktop.
  • Interactivity and animations – You can use Adobe InDesign built-in interactivity and animation features to create different states of the web or application design for interaction. Most people use these features while creating magazines for Digital Publishing Solution and fixed layout EPUB export but they can be suited for prototyping as well.
  • Export options – InDesign can export the wireframes and prototypes you create in a variety of formats. Interactive PDF will probably be your format of choice in the majority of cases, but you can also use the fairly new Publish Online feature to convert your document to interactive HTML that can be viewed in modern desktop and mobile browsers. Unfortunately, you don’t have any control over the export using Publish Online, and the exported file is hosted on Adobe servers. You can share the prototype URL with your client, or embed it into your site. For more control and direct export to HTML5 you can use the in5 extension from Ajar Productions.

Adobe InDesign has many pros to be used as a wireframe and prototyping tool, but it also has some disadvantages:

  • Lack of predefined wireframe templates and elements – Since InDesign is not meant to be a wireframing tool, you have to create and prepare templates and object assets by yourself. (I’ll show you how to make that preparation step later in this article.) The good news is that most of this work will be done only once and after a few hours of work you’ll be ready to jump start your InDesign wireframing. Also, there are a lot of assets and wireframe kits that you can download from the internet, so there is no need to draw everything yourself.
  • Interactivity and animation features are limited and can be time-consuming – Although you can easily connect pages and add some interactivity and animations, that process sometimes takes a long time. Some simple interactions cannot be achieved simply, and you have to figure out how to work around that. If you haven’t used InDesign’s interactivity features before, you’ll have a slight learning curve before you’re able to apply them efficiently.
  • InDesign documents can’t export directly as layered PSD files – If you do your visual design in Adobe Photoshop and want to have separated wireframe elements to build your design on, then you have to export your wireframes to PDF first, then open that PDF in Illustrator and export it as a layered PSD. People working on the Mac can also use a free script written by Rob Day to save InDesign files as layered PSDs.

Good Wireframe Preparation is Half of the Work

Start by fine tuning your working environment. If you do not already have a saved workspace in Illustrator and InDesign for this kind of work, create one. In Illustrator, start with the predefined Web workspace and adapt it to your needs: close panels you will not use often and open the ones you will, then rearrange them to suit your work style. When done, save the workspace by choosing Window > Workspace > New Workspace… Do the same thing in InDesign using the Digital Publishing workspace as a starter.

Assembling Wireframe/Mockup/Prototype Kits

An efficient wireframing workflow using Illustrator and InDesign requires that you invest some time in making your user interface assets kit first. Since the introduction of Adobe Creative Cloud, CC Libraries are the best way to store all your UI kit components.

You can create one or more Libraries for wireframing and prototyping purposes. For example, you can create one Library for website wireframing, another for iOS applications, a third for Android applications, and so on.

To create a library, open the Libraries panel and choose Create New Library from the additional drop-down menu. Assets you put in libraries can be made and used in different Adobe desktop or mobile apps on all devices you log into with your Adobe ID. That means you can start a project on your iPad or iPhone, continue on the desktop computer in the office, and make last-minute changes on your home laptop, with all the same assets available on all devices. If you work as a part of a larger team, library assets can be shared between team members and you can collaborate on them.

Libraries can contain colours, graphics, layer styles (Photoshop only), and paragraph and character styles. You add an asset to a library by clicking on the corresponding button at the bottom of the CC Libraries panel with the respective element selected. You can also add graphic assets by dragging them directly from your artboard to the Libraries panel. Assets in libraries are organized by categories. To stick with good practices, rename each asset with a meaningful name. Libraries are searchable, and finding an asset by typing the beginning of its name will save you tons of time, especially when you have many different items in your libraries. You can search only the current library or all the libraries you’ve created. To change an asset’s name, just double-click on it and type a new one.

Creating Kits Components

Although Adobe InDesign has some basic drawing tools that are pretty similar to Illustrator’s, Illustrator is a much better choice for drawing different kinds of elements in your wireframe. As a rule of thumb, make all kit elements that require drawing beyond basic geometric shapes in Illustrator, and make simpler elements that contain text you’ll need to change in the layout, like simple buttons, in InDesign.

For starters, make a list of all the elements you’ll need in the wireframe, like navigation elements, page elements including images, video frames and text boxes, icons, avatars, form elements, and all other interface elements. After your list is completed, you can head to Illustrator and InDesign for element creation. Start by creating a new document for your wireframe or mockup kit components. Double-check that you choose either the Web/Devices profile in Illustrator or the Web/Digital Publishing intent in the New Document dialog box, so that pixels are used as units and the color mode is RGB.

Make wireframe kit assets as simple as possible, because they need to give fast visual cues for what they represent without being too detailed. You should use a very limited colour palette, preferably a few shades of grey. Visually accentuate elements that are more important by colouring them with darker shades, or by giving them greater contrast. For higher fidelity mockups or prototypes, you’ll create UI kits with more detailed elements that use each project’s respective colour palette. For easy access to colour palettes, add them to CC Libraries too.

Adobe Illustrator assets in CC Libraries

Assets you add to Libraries from Illustrator are linked by default (since Adobe CC 2015). That means that when you modify a library asset in Illustrator, the changes are reflected in all instances where it is used. If you want to add an unlinked asset to a document, press the Option/Alt key while dragging it from the panel.

When you use linked Illustrator assets in InDesign, they have a little cloud icon in the upper left corner when the document is viewed in Normal mode, and they are all listed in the Links panel. If you modify a Library asset in Illustrator, the changes won’t be applied to the InDesign document automatically. The cloud icon will be replaced with the Modified Link exclamation mark icon, and you’ll have to update these links.

InDesign assets you place in an InDesign document are not linked. That means that you can edit instances independently of the original, and when the original asset is modified, those changes are not reflected in the assets you have already put into the layout.

Use these behaviours to your advantage when creating wireframes: add assets to a Library from Illustrator when you expect they’ll need to be modified and updated globally, or add them from InDesign when you know you’ll want to modify them individually. Note that you can also make graphics in Illustrator and then copy/paste them into InDesign, modify them if needed, and then add them to a Library as InDesign assets.

If you happen to forget which graphic asset was created by which application, look at the right side of each item in the Libraries panel while using the “Show Items as a List” view mode.

Adobe InDesign Flexibility with Texts

Since you’ll probably want to be able to easily change text and its formatting, create text-based assets in InDesign. InDesign has a nice feature for filling text boxes with placeholder text: when you draw a text box, just right-click on it and choose Fill With Placeholder Text. You can easily add a text box to a Library like any other graphic element, just by dragging it. When you later use those text assets as part of your wireframe layout, you can modify the text box itself or the text it contains however you like.

Consider making button UI elements in InDesign too. To create a button, make a text frame first, type the respective text in it, and then use Object > Text Frame Options to define the inset spacing. Adjust the vertical justification of the text inside the box by choosing the desired option from the Align drop-down menu. Switch to the Auto-Size tab and choose the appropriate auto-sizing option (that would probably be Width Only) and a convenient reference point. If you do not want to let InDesign break your text into more than one line, check the No Line Breaks option.

Give Yourself a Hand

There are a lot of Adobe Illustrator wireframing and prototyping UI kits available on the internet that you can buy, or even download for free, if you want to speed up your wireframing preparation phase. Maybe you already have a lot of those elements drawn and can dig them out of your archived projects. Open those documents, tweak any previously made elements if needed, and put them into the respective Libraries.

If you are designing for a particular platform, for example an iOS or Android application, be sure to carefully read its human interface guidelines and use appropriate assets. Don’t put pressure on yourself to have every possible asset in your Libraries before you start the actual wireframing process, because you can also add assets to your libraries later, on the go.

Creating InDesign Wireframe Templates

There is another important preparation step left that you also need to do: create the InDesign templates you’ll use for making wireframes. Start by creating a new document with the Web or Digital Publishing intent and define an appropriate page size for the screens you are designing for. Since it is recommended that you use some kind of grid for laying out your wireframe elements, set up the margins and create a column grid by setting the number of columns and the gutter space. You can change those settings later from Layout > Margins and Columns with the respective master page (or pages) selected in the Pages panel. If you need horizontal guides and complementary vertical guides, create them manually or make an additional grid by using Layout > Create Guides… When creating a grid, you can help yourself with one of the online grid calculator tools like Gridulator.

You can also create additional templates for presentation purposes, with a device mockup as a frame for your wireframes. Since one InDesign document can be placed into another, you can create wireframes in one InDesign document and then place it into another one for presentation purposes. Although it might seem complicated at first, it is actually very simple and offers an effective workflow. Keeping the actual wireframe in a separate document makes it easier to continue building from approved wireframes to polished visual design. On the other hand, you have a presentation-ready template to place wireframes into, add labels and comments, and you are all set up to show your best to the client. When you make modifications to the wireframe file, just update it like any other link in the presentation document, and ta-da! All the changes are in your presentation too.

In the Layers panel you can prepare separate layers for different kinds of content: user interface elements, interactive features, gestures, labels, or notes. If you need more layers for a specific project, you can easily add them anytime during the wireframing process.

When you are done, save your templates as InDesign .indt template files. After all the templates you need are saved, you are finally ready to start wireframing efficiently.

Building Wireframes

First things first – start with the Master page. Drag all elements that will be the same on all screens of your project from the Library. If you are designing a website, those are usually the header with logo and navigation, and the footer. Since you can make more than one Master page, and they can be based on each other, don’t forget to take advantage of that feature. For example, depending on the project, you can create a Master page for one website category, then make new Masters based on the first one and change only the elements that need to be different for the other categories.

You can’t select, change or delete Master elements on regular document pages unless you click on them while holding Command/Control + Shift to override the master. Once your element is overridden you can change any of its parameters or completely delete it from the layout. Keep in mind that even when you override the element, the parameters that you haven’t changed are still dependent on the Master. For example, if you override an element and change its color, its size is still connected to the Master’s, and if you change the size on the Master page it will also be modified on the element you previously overrode.

When inserting additional pages into your wireframing document, remember to base them on the respective master. If you need to change the Master for pages already in the layout, select them in the Pages panel, make a right click, and choose Apply Master to Pages… When you are finished with the Masters, move on to the pages for all screens of your project. Use Library assets and arrange them using Smart Guides and Align options to create the final UI wireframe layout.

If you are designing for more than one screen size, make alternate layouts from Layout > Create Alternate Layout or the Pages panel. You can use liquid layout rules when creating alternate layouts to automatically or semi-automatically adapt page elements from one size and orientation to another, or you can control them manually. For applying and testing Liquid Layout rules, use the Page Tool or open the panel: Window > Interactive > Liquid Layout.

Adding interactivity

Adobe InDesign has a bunch of interactivity features that you can take advantage of when creating wireframes or prototypes. Which features you’ll include depends on the final format you plan to save your wireframes, prototypes, or presentation into. If you are exporting as PDF, interactivity is limited, but you can at least make links between screens work. You’ll use the Hyperlinks panel to create them. Select the item you want to behave as a link and click on the New Hyperlink icon. From the Link To drop-down menu, choose Page and enter the desired page number. Repeat that action on all items you want to behave as links between the screens. You can also add hyperlinks to objects residing on the Master pages, and that can be a real time saver: create links on the Master page once and they will work on all screens of your document.

Another interactive feature that you can use in the interactive PDF format is a button with a Show/Hide action, which you can use to build all kinds of pop-up content.

You can create a button from any graphic, text, image, or group. To create a button from a selected object, use the Window > Interactive > Buttons and Forms panel and click on the Convert to Button icon. Buttons can have different states created for Normal, Rollover and Click appearance. To add a rollover or click state to a button, click on the state in the Buttons panel to activate it and then edit the button’s appearance for that state the way you want it to look. To add an action to a button, click on the plus sign and choose it from the list. Take into account that actions listed under SWF/EPUB will not work in an interactive PDF. For creating pop-up elements, choose the Show/Hide Buttons and Forms action. To hide buttons until triggered, check the Hidden Until Triggered option. You can include multi-state objects in an interactive PDF, but only if you export them as SWF first and then place those SWFs back in the InDesign layout for PDF export. That workflow is tedious and those PDFs cannot be properly viewed in all PDF readers, so you’d better avoid doing this unless really necessary.

If you want to convert your document to an HTML prototype using the InDesign CC 2015 Publish Online feature, you can use many more interactive options like animations, multi-state objects, and multiple button actions, including all those intended for SWF/EPUB export.

You can add simple animations using the Animation panel by choosing one of the built-in presets from the drop-down menu and setting its properties. One object can have only one animation applied, but if you need to add more, group your object with an empty box and apply the new animation to that newly created group. Repeat that a few times if needed. For objects that need to show different states, create a multi-state object. Create an object for each state. An object can be a single element (picture, text box, graphic) or a group of different elements. Open the Window > Interactive > Object States panel, select all the objects you created for the multi-state object, and click on the New button at the bottom of the panel. After your multi-state object is made, you’ll need to create buttons to go from one object state to another, using the Go To Next State and Go To Previous State actions, or reveal a specific object state with the Go To State action.

If you want to have a scrollable frame in your wireframe/prototype, the easiest way to create one is by using the Universal Scrolling Frames extension from Ajar Productions. After you download and install the extension, you can use it as an InDesign panel. For a scrollable frame, you’ll need to make the content and one frame to act as the container. The content can be a text frame, a picture, or several elements combined. If using more than one element as the content, don’t forget to group them. When you are finished making the content and the container box, select the content and do Edit > Cut. Then select the container and paste the content inside by using Edit > Paste Into. Finally, select the container and adjust the desired scroll direction using Universal Scrolling Frames.

By combining buttons, multi-state objects, animations, and scrollable frames, you can achieve a rich interactive experience.

To test interactivity in InDesign, use the EPUB Interactivity Preview panel. You can preview a single page or the whole document. Enlarge the preview panel as needed.

If you haven’t been using Adobe InDesign’s interactive features yet, be prepared for a bit of a learning curve, but with a little practice and a few trial-and-error attempts you’ll quickly master them.

Exporting finished documents

When you are done making the wireframe and presentation files, all that is left is to show your great ideas to the client in the best way possible. For that purpose you’ll need to export your wireframes in one of the formats your client can preview. Although InDesign files can be exported in a variety of formats, you are probably going to use the most common, interactive PDF, or the Publish Online feature if testing functional low or high fidelity prototypes. To save as an interactive PDF, choose Adobe PDF (Interactive) from the Format drop-down menu in the Export dialog box and adjust the properties as needed. Do not forget to tick Forms and Media if there are interactive elements that you want to include. Clients can view PDF wireframes in the free Adobe Reader and write all their comments in that same file.

You can also use the PDF document you export from InDesign to create an InVision prototype (or use some other tool that supports PDFs). If your standard prototyping tool, perhaps Marvel, can’t import PDFs, export your InDesign wireframe pages as JPEG or PNG images.

To export an interactive HTML prototype that can be seen in modern browsers on different devices, go to File > Publish Online or click on the Publish Online button in the Application Bar. After the document is prepared to be published online and then uploaded, you can copy the document URL to share with all the stakeholders and start the reviewing process. You can also embed that published prototype on your site.

The downside of the Publish Online feature is that it doesn’t give you any additional control over the export, and files are always stored on Adobe’s servers. Also, it’s still a preview feature, and you can’t be sure in which direction Adobe is going to develop it, or whether it will be discontinued.

Once your wireframe/prototype is exported, it’s time for the testing, reviewing, and iterating process to start.

This article was written by IVANA MILIČIĆ, a Toptal freelance designer.
