
An Introduction to Protocol-oriented Programming in Swift


Protocols are a very powerful feature of the Swift programming language.

Protocols are used to define a “blueprint of methods, properties, and other requirements that suit a particular task or piece of functionality.”

Swift checks for protocol conformity issues at compile-time, allowing developers to discover some fatal bugs in the code even before running the program. Protocols allow developers to write flexible and extensible code in Swift without having to compromise the language’s expressiveness.

Swift takes the convenience of using protocols a step further by providing workarounds to some of the most common quirks and limitations of interfaces that plague many other programming languages.

Write flexible and extensible code in Swift with protocol-oriented programming.

In earlier versions of Swift, it was only possible to extend classes, structures, and enums, as is true in many modern programming languages. However, since version 2 of Swift, it became possible to extend protocols as well.

This article examines how protocols in Swift can be used to write reusable and maintainable code and how changes to a large protocol-oriented codebase can be consolidated to a single place through the use of protocol extensions.

Protocols

What is a protocol?

In its simplest form, a protocol is an interface that describes some properties and methods. Any type that conforms to a protocol should fill in the specific properties defined in the protocol with appropriate values and implement its requisite methods. For instance:

protocol Queue {
    var count: Int { get }
    mutating func push(_ element: Int) 
    mutating func pop() -> Int
}

The Queue protocol describes a queue that contains integer items. The syntax is quite straightforward.

Inside the protocol block, when we describe a property, we must specify whether the property is only gettable { get } or both gettable and settable { get set }. In our case, the variable count (of type Int) is gettable only.

If a protocol requires a property to be gettable and settable, that requirement cannot be fulfilled by a constant stored property or a read-only computed property.

If the protocol only requires a property to be gettable, the requirement can be satisfied by any kind of property, and it is valid for the property to also be settable, if this is useful for your own code.

For functions defined in a protocol, it is important to indicate whether the function can modify the conforming instance by using the mutating keyword. Other than that, the signature of a function suffices as the definition.

To conform to a protocol, a type must provide all instance properties and implement all methods described in the protocol. Below, for example, is a struct Container that conforms to our Queue protocol. The struct essentially stores pushed Ints in a private array items.

struct Container: Queue {
    private var items: [Int] = []
    
    var count: Int {
        return items.count
    }
    
    mutating func push(_ element: Int) {
        items.append(element)
    }
    
    mutating func pop() -> Int {
        return items.removeFirst()
    }
}

Our current Queue protocol, however, has a major disadvantage.

Only containers that deal with Ints can conform to this protocol.

We can remove this limitation by using the “associated types” feature. Associated types work like generics. To demonstrate, let’s change the Queue protocol to utilize associated types:

protocol Queue {
    associatedtype ItemType
    var count: Int { get }
    func push(_ element: ItemType) 
    func pop() -> ItemType
}

Now the Queue protocol allows the storage of any type of items.

In the implementation of the Container type, the compiler infers the associated type from the context (i.e., the method return type and parameter types). This approach allows us to create a Container with a generic item type. (Note that Container is now defined as a class, a reference type, so push and pop no longer need to be marked mutating.) For example:

class Container<Item>: Queue {
    private var items: [Item] = []
    
    var count: Int {
        return items.count
    }
    
    func push(_ element: Item) {
        items.append(element)
    }
    
    func pop() -> Item {
        return items.removeFirst()
    }
}

Using protocols simplifies writing code in many cases.

For instance, any object that represents an error can conform to the Error (or LocalizedError, in case we want to provide localized descriptions) protocol.

The same error handling logic can then be applied to any of these error objects throughout your code. Consequently, you don’t need to use any specific object (like NSError in Objective-C) to represent errors; you can use any type that conforms to the Error or LocalizedError protocols.

You can even extend the String type to make it conform with the LocalizedError protocol and throw strings as errors.

extension String: LocalizedError {
    public var errorDescription: String? {
        return NSLocalizedString(self, comment: "")
    }
}


throw "Unfortunately something went wrong"


func handle(error: Error) {
    print(error.localizedDescription)
}

Protocol Extensions

Protocol extensions build on the awesomeness of protocols. They allow us to:

  1. Provide default implementation of protocol methods and default values of protocol properties, thereby making them “optional”. Types that conform to a protocol can provide their own implementations or use the default ones.
  2. Add implementation of additional methods not described in the protocol and “decorate” any types that conform to the protocol with these additional methods. This feature allows us to add specific methods to multiple types that already conform to the protocol without having to modify each type individually.

Default Method Implementation

Let’s create one more protocol:

protocol ErrorHandler {
    func handle(error: Error)
}

This protocol describes objects that are in charge of handling errors that occur in an application. For example:

struct Handler: ErrorHandler {
    func handle(error: Error) {
        print(error.localizedDescription)
    }
}

Here we just print the localized description of the error. With a protocol extension, we can make this implementation the default.

extension ErrorHandler {
    func handle(error: Error) {
        print(error.localizedDescription)
    }
}

Doing this makes the handle method optional by providing a default implementation.

The ability to extend an existing protocol with default behaviors is quite powerful, allowing protocols to grow and be extended without having to worry about breaking compatibility of existing code.

Conditional Extensions

So we’ve provided a default implementation of the handle method, but printing to the console is not terribly helpful to the end user.

We’d probably prefer to show them some sort of alert view with a localized description in cases where the error handler is a view controller. To do this, we can extend the ErrorHandler protocol, but limit the extension to apply only in certain cases (i.e., when the type is a view controller).

Swift allows us to add such conditions to protocol extensions using the where keyword.

extension ErrorHandler where Self: UIViewController {
    func handle(error: Error) {
        let alert = UIAlertController(title: nil, message: error.localizedDescription, preferredStyle: .alert)
        let action = UIAlertAction(title: "OK", style: .cancel, handler: nil)
        alert.addAction(action)
        present(alert, animated: true, completion: nil)
    }
}

Self (with capital “S”) in the code snippet above refers to the type (structure, class, or enum). By specifying that we only extend the protocol for types that inherit from UIViewController, we are able to use UIViewController-specific methods (such as present(_:animated:completion:)).

Now, any view controllers that conform to the ErrorHandler protocol have their own default implementation of the handle method that shows an alert view with a localized description.

Ambiguous Method Implementations

Let’s assume that there are two protocols, both of which have a method with the same signature.

protocol P1 {
    func method()
    //some other methods
}




protocol P2 {
    func method()
    //some other methods
}

Both protocols have an extension with a default implementation of this method.

extension P1 {
    func method() {
        print("Method P1")
    }
}




extension P2 {
    func method() {
        print("Method P2")
    }
}

Now let’s assume that there is a type that conforms to both protocols.

struct S: P1, P2 {
    
}

In this case, we have an issue with ambiguous method implementation. The type doesn’t indicate clearly which implementation of the method it should use. As a result, we get a compilation error. To fix this, we have to add the implementation of the method to the type.

struct S: P1, P2 {
    func method() {
        print("Method S")
    }
}

Many object-oriented programming languages are plagued with limitations surrounding the resolution of ambiguous extension definitions. Swift handles this quite elegantly through protocol extensions by allowing the programmer to take control where the compiler falls short.

Adding New Methods

Let’s take a look at the Queue protocol one more time.

protocol Queue {
    associatedtype ItemType
    var count: Int { get }
    func push(_ element: ItemType) 
    func pop() -> ItemType
}

Each type that conforms to the Queue protocol has a count instance property that defines the number of stored items. This enables us, among other things, to compare such types to decide which one is larger. We can add this method through a protocol extension.

extension Queue {
    func compare<Q>(queue: Q) -> ComparisonResult where Q: Queue  {
        if count < queue.count { return .orderedAscending }
        if count > queue.count { return .orderedDescending }
        return .orderedSame
    }
}

This method is not described in the Queue protocol itself because it is not related to queue functionality.

It is therefore not a default implementation of the protocol method, but rather is a new method implementation that “decorates” all types that conform to the Queue protocol. Without protocol extensions we would have to add this method to each type separately.

Protocol Extensions vs. Base Classes

Protocol extensions may seem quite similar to using a base class, but there are several benefits of using protocol extensions. These include, but are not necessarily limited to:

  1. Since classes, structures, and enums can conform to more than one protocol, they can take the default implementations of multiple protocols (see the sketch after this list). This is conceptually similar to multiple inheritance in other languages.
  2. Protocols can be adopted by classes, structures, and enums, whereas base classes and inheritance are available for classes only.
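
Here is a minimal sketch of that first point. The Movable and Chargeable protocols and the Robot type are invented purely for illustration:

protocol Movable {
    func move()
}

protocol Chargeable {
    func charge()
}

extension Movable {
    func move() { print("Moving") }
}

extension Chargeable {
    func charge() { print("Charging") }
}

// Robot picks up both default implementations without writing any code of its own.
struct Robot: Movable, Chargeable {}

let robot = Robot()
robot.move()    // prints "Moving"
robot.charge()  // prints "Charging"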

Swift Standard Library Extensions

In addition to extending your own protocols, you can extend protocols from the Swift standard library. For instance, if we want to find the average size of the collection of queues, we can do so by extending the standard Collection protocol.

Sequence data structures provided by Swift’s standard library, whose elements can be traversed and accessed through indexed subscript, usually conform to the Collection protocol. Through protocol extension, it is possible to extend all such standard library data structures or extend a few of them selectively.

Note: The protocol formerly known as CollectionType in Swift 2.x was renamed to Collection in Swift 3.

extension Collection where Iterator.Element: Queue {
    func avgSize() -> Int {
        let size = map { $0.count }.reduce(0, +)
        return Int(round(Double(size) / Double(count.toIntMax())))
    }
}

Now we can calculate the average size of any collection of queues (Array, Set, etc.). Without protocol extensions, we would have needed to add this method to each collection type separately.

In the Swift standard library, protocol extensions are used to implement, for instance, such methods as map, filter, reduce, etc.

extension Collection {
    public func map<T>(_ transform: (Self.Iterator.Element) throws -> T) rethrows -> [T] {
        // Simplified for illustration; the actual standard library implementation
        // also reserves capacity up front and is more heavily optimized.
        var result: [T] = []
        for element in self {
            result.append(try transform(element))
        }
        return result
    }
}

Protocol Extensions and Polymorphism

As I said earlier, protocol extensions allow us to add default implementations of some methods and add new method implementations as well. But what is the difference between these two features? Let’s go back to the error handler, and find out.

protocol ErrorHandler {
    func handle(error: Error)
}


extension ErrorHandler {
    func handle(error: Error) {
        print(error.localizedDescription)
    }
}


struct Handler: ErrorHandler {
    func handle(error: Error) {
        fatalError("Unexpected error occurred")
    }
}


enum ApplicationError: Error {
    case other
}


let handler: Handler = Handler()
handler.handle(error: ApplicationError.other)

The result is a fatal error.

Now remove the handle(error: Error) method declaration from the protocol.

protocol ErrorHandler {
    
}

The result is the same: a fatal error.

Does it mean that there is no difference between adding a default implementation of the protocol method and adding a new method implementation to the protocol?

No! A difference does exist, and you can see it by changing the type of the variable handler from Handler to ErrorHandler.

let handler: ErrorHandler = Handler()

Now the output to the console is: The operation couldn’t be completed. (ApplicationError error 0.)

But if we return the declaration of the handle(error: Error) method to the protocol, the result will change back to the fatal error.

protocol ErrorHandler {
    func handle(error: Error)
}

Let’s look at the order of what happens in each case.

When the method declaration exists in the protocol:

The protocol declares the handle(error: Error) method and provides a default implementation. The method is overridden in the Handler implementation. So, the correct implementation of the method is invoked at runtime, regardless of the type of the variable.

When the method declaration doesn’t exist in the protocol:

Because the method is not declared in the protocol, the type is not able to override it. That is why the implementation of a called method depends on the type of the variable.

If the variable is of type Handler, the method implementation from the type is invoked. If the variable is of type ErrorHandler, the method implementation from the protocol extension is invoked.

Protocol-oriented Code: Safe yet Expressive

In this article, we demonstrated some of the power of protocol extensions in Swift.

Unlike other programming languages with interfaces, Swift doesn’t restrict protocols with unnecessary limitations. Swift works around common quirks of those programming languages by allowing the developer to resolve ambiguity as necessary.

With Swift protocols and protocol extensions, the code you write can be as expressive as most dynamic programming languages and still be type-safe at compile time. This allows you to ensure reusability and maintainability of your code and to make changes to your Swift app codebase with more confidence.

We hope that this article will be useful to you and welcome any feedback or further insights.

This article was originally posted on Toptal


Getting the Most Out of Your PHP Log Files: A Practical Guide


It could rightfully be said that logs are one of the most underestimated and underutilized tools at a freelance PHP developer’s disposal. Despite the wealth of information they can offer, it is not uncommon for logs to be the last place a developer looks when trying to resolve a problem.

In truth, PHP log files should in many cases be the first place to look for clues when problems occur. Often, the information they contain could significantly reduce the amount of time spent pulling out your hair trying to track down a gnarly bug.

But perhaps even more importantly, with a bit of creativity and forethought, your log files can be leveraged to serve as a valuable source of usage information and analytics. Creative use of log files can help answer questions such as: What browsers are most commonly being used to visit my site? What’s the average response time from my server? What was the percentage of requests to the site root? How has usage changed since we deployed the latest updates? And much, much more.

PHP log files

This article provides a number of tips on how to configure your log files, as well as how to process the information that they contain, in order to maximize the benefit that they provide.

Although this article focuses technically on logging for PHP developers, much of the information presented herein is fairly “technology agnostic” and is relevant to other languages and technology stacks as well.

Note: This article presumes basic familiarity with the Unix shell. For those lacking this knowledge, an Appendix is provided that introduces some of the commands needed for accessing and reading log files on a Unix system.

Our PHP Log File Example Project

As an example project for discussion purposes in this article, we will take Symfony Standard as a working project and we’ll set it up on Debian 7 Wheezy with rsyslogd, nginx, and PHP-FPM.

composer create-project symfony/framework-standard-edition my "2.6.*"

This quickly gives us a working test project with a nice UI.

Tips for Configuring Your Log Files

Here are some pointers on how to configure your log files to help maximize their value.

Error Log Configuration

Error logs represent the most basic form of logging; i.e., capturing additional information and detail when problems occur. So, in an ideal world, you would want there to be no errors and for your error logs to be empty. But when problems do occur (as they invariably do), your error logs should be one of the first stops you make on your debugging trail.

Error logs are typically quite easy to configure.

For one thing, all error and crash messages can be logged in the error log in exactly the same format in which they would otherwise be presented to a user. With some simple configuration, the end user will never need to see those ugly error traces on your site, while devops will still be able to monitor the system and review these error messages in all their gory detail. Here’s how to set up this kind of logging in PHP:

log_errors = On
error_reporting = E_ALL
error_log = /path/to/my/error/log

Another two settings that are important for a live site, to preclude gory levels of error detail from being presented to users, are:

display_errors = Off
display_startup_errors = Off

System Log (syslog) Configuration

There are many generally compatible implementations of the syslog daemon in the open source world including:

  • syslogd and sysklogd – most often seen on BSD family systems, CentOS, Mac OS X, and others
  • syslog-ng – default for modern Gentoo and SuSE builds
  • rsyslogd – widely used on the Debian and Fedora families of operating systems

(Note: In this article, we’ll be using rsyslogd for our examples.)

The basic syslog configuration is generally adequate for capturing your log messages in a system-wide log file (normally /var/log/syslog; might also be /var/log/messages or /var/log/system.log depending on the distro you’re using).

The system log provides several log facilities, eight of which (LOG_LOCAL0 through LOG_LOCAL7) are reserved for user-deployed projects. Here, for example, is how you might set up LOG_LOCAL0 to write to four separate log files, based on logging level (i.e., error, warning, info, debug):

# /etc/rsyslog.d/my.conf

local0.err      /var/log/my/err.log
local0.warning  /var/log/my/warning.log
local0.info     -/var/log/my/info.log
local0.debug    -/var/log/my/debug.log

Now, whenever you write a log message to LOG_LOCAL0 facility, the error messages will go to /var/log/my/err.log, warning messages will go to /var/log/my/warning.log, and so on. Note, though, that the syslog daemon filters messages for each file based on the rule of “this level and higher”. So, in the example above, all error messages will appear in all four configured files, warning messages will appear in all but the error log, info messages will appear in the info and debug logs, and debug messages will only go to debug.log.

One additional important note: the - signs before the info and debug level files in the above configuration example indicate that writes to those files should be performed asynchronously (i.e., these writes are non-blocking). This is typically fine (and even recommended in most situations) for info and debug logs, but it’s best to have writes to the error log (and most probably the warning log as well) be synchronous.

In order to shut down a less important level of logging (e.g., on a production server), you may simply redirect related messages to /dev/null (i.e., to nowhere):

local0.debug    /dev/null # -/var/log/my/debug.log

One specific customization that is useful, especially to support some of the PHP log file parsing we’ll be discussing later in this article, is to use tab as the delimiter character in log messages. This can easily be done by adding the following file in /etc/rsyslog.d:

# /etc/rsyslog.d/fixtab.conf

$EscapeControlCharactersOnReceive off

And finally, don’t forget to restart the syslog daemon after you make any configuration changes in order for them to take effect:

service rsyslog restart

Server Log Configuration

Unlike application logs and error logs that you can write to, server logs are exclusively written to by the corresponding server daemons (e.g., web server, database server, etc.) on each request. The only “control” you have over these logs is to the extent that the server allows you to configure its logging functionality. Though there can be a lot to sift through in these files, they are often the only way to get a clear sense of what’s going on “under the hood” with your server.

Let’s deploy our Symfony Standard example application on nginx environment with MySQL storage backend. Here’s the nginx host config we will be using:

server {
    server_name my.log-sandbox;
    root /var/www/my/web;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /app.php$is_args$args;
    }
    # DEV
    # This rule should only be placed on your development environment
    # In production, don't include this and don't deploy app_dev.php or config.php
    location ~ ^/(app_dev|config)\.php(/|$) {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
    # PROD
    location ~ ^/app\.php(/|$) {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
        # Prevents URIs that include the front controller. This will 404:
        # http://domain.tld/app.php/some-path
        # Remove the internal directive to allow URIs like this
        internal;
    }

    error_log /var/log/nginx/my_error.log;
    access_log /var/log/nginx/my_access.log;
}

With regard to the last two directives above: access_log represents the general requests log, while error_log is for errors, and, as with application error logs, it’s worth setting up extra monitoring to be alerted to problems so you can react quickly.

Note: This is an intentionally oversimplified nginx config file that is provided for example purposes only. It pays almost no attention to security and performance and shouldn’t be used as-is in any “real” environment.

This is what we get in /var/log/nginx/my_access.log after typing http://my.log-sandbox/app_dev.php/ in browser and hitting Enter.

192.168.56.1 - - [26/Apr/2015:16:13:28 +0300] "GET /app_dev.php/ HTTP/1.1" 200 6715 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36"
192.168.56.1 - - [26/Apr/2015:16:13:28 +0300] "GET /bundles/framework/css/body.css HTTP/1.1" 200 6657 "http://my.log-sandbox/app_dev.php/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36"
192.168.56.1 - - [26/Apr/2015:16:13:28 +0300] "GET /bundles/framework/css/structure.css HTTP/1.1" 200 1191 "http://my.log-sandbox/app_dev.php/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36"
192.168.56.1 - - [26/Apr/2015:16:13:28 +0300] "GET /bundles/acmedemo/css/demo.css HTTP/1.1" 200 2204 "http://my.log-sandbox/app_dev.php/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36"
192.168.56.1 - - [26/Apr/2015:16:13:28 +0300] "GET /bundles/acmedemo/images/welcome-quick-tour.gif HTTP/1.1" 200 4770 "http://my.log-sandbox/app_dev.php/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36"
192.168.56.1 - - [26/Apr/2015:16:13:28 +0300] "GET /bundles/acmedemo/images/welcome-demo.gif HTTP/1.1" 200 4053 "http://my.log-sandbox/app_dev.php/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36"
192.168.56.1 - - [26/Apr/2015:16:13:28 +0300] "GET /bundles/acmedemo/images/welcome-configure.gif HTTP/1.1" 200 3530 "http://my.log-sandbox/app_dev.php/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36"
192.168.56.1 - - [26/Apr/2015:16:13:28 +0300] "GET /favicon.ico HTTP/1.1" 200 6518 "http://my.log-sandbox/app_dev.php/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36"
192.168.56.1 - - [26/Apr/2015:16:13:30 +0300] "GET /app_dev.php/_wdt/e50d73 HTTP/1.1" 200 13265 "http://my.log-sandbox/app_dev.php/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36"

This shows that, for serving one page, the browser actually performs 9 HTTP calls. Seven of those, however, are requests for static content, which are plain and lightweight. Nevertheless, they still consume network resources, and this is what can be optimized by using various sprites and minification techniques.

While those optimisations are to be discussed in another article, what’s relevant here is that we can log requests for static content separately by using another location directive for them:

location ~ \.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js)$ {
    access_log /var/log/nginx/my_access-static.log;
}

Remember that nginx location performs simple regular expression matching, so you can include as many static content extensions as you expect to dispatch on your site.

Parsing such logs is no different than parsing application logs.

Other Logs Worth Mentioning

Two other PHP logs worth mentioning are the debug log and data storage log.

The Debug Log

Another convenient thing about nginx logs is the debug log. We can turn it on by replacing the error_log line of the config with the following (requires that the nginx debug module be installed):

error_log /var/log/nginx/my_error.log debug;

The same setting applies for Apache or whatever other webserver you use.

And incidentally, debug logs are not related to error logs, even though they are configured in the error_log directive.

Although the debug log can indeed be verbose (a single nginx request, for example, generated 127KB of log data!), it can still be very useful. Wading through a log file may be cumbersome and tedious, but it can often quickly provide clues and information that greatly help accelerate the debugging process.

In particular, the debug log can really help with debugging nginx configurations, especially the most complicated parts, like location matching and rewrite chains.

Of course, debug logs should never be enabled in a production environment. The amount of space they consume and the amount of information they record translate into a lot of I/O load on your server, which can degrade the whole system’s performance significantly.

Data Storage Logs

Another type of server log (useful for debugging) is data storage logs. In MySQL, you can turn them on by adding these lines:

[mysqld]
general_log = 1
general_log_file = /var/log/mysql/query.log

These logs simply contain a list of queries run by the system while serving database requests in chronological order, which can be helpful for various debugging and tracing needs. However, they should not stay enabled on production systems, since they will generate extra unnecessary I/O load, which affects performance.

Writing to Your Log Files

PHP itself provides functions for opening, writing to, and closing log files (openlog(), syslog(), and closelog(), respectively).

There are also numerous logging libraries for the PHP developer, such as Monolog (popular among Symfony and Laravel users), as well as various framework-specific implementations, such as the logging capabilities incorporated into CakePHP. Generally, libraries like Monolog not only wrap syslog() calls, but also allow using other backend functionality and tools.
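
As a quick aside, here is a minimal sketch of routing a message through Monolog's syslog handler instead of calling syslog() directly. It assumes Monolog has been installed via Composer; the channel name and autoload path are illustrative:

<?php

require __DIR__ . '/vendor/autoload.php';

use Monolog\Logger;
use Monolog\Handler\SyslogHandler;

// "my-app" serves as both the channel name and the syslog ident here.
$log = new Logger('my-app');
$log->pushHandler(new SyslogHandler('my-app', LOG_LOCAL0));

$log->info('It works through Monolog too!');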

Here’s a simple example of how to write to the log with the native functions:

<?php

openlog(uniqid(), LOG_ODELAY, LOG_LOCAL0);
syslog(LOG_INFO, 'It works!');

Our call here to openlog:

  • configures PHP to prepend a unique identifier to each system log message within the script’s lifetime
  • sets it to delay opening the syslog connection until the first syslog() call has occurred
  • sets LOG_LOCAL0 as the default logging facility

Here’s what the contents of the log file would look like after running the above code:

# cat /var/log/my/info.log
Mar  2 00:23:29 log-sandbox 54f39161a2e55: It works!

Maximizing the Value of Your PHP Log Files

Now that we’re all good with theory and basics, let’s see how much we can get from our logs while making as few changes as possible to our sample Symfony Standard project.

First, let’s create the scripts src/log-begin.php (to properly open and configure our logs) and src/log-end.php (to log information about successful completion). Note that, for simplicity, we’ll just write all messages to the info log.

# src/log-begin.php

<?php

define('START_TIME', microtime(true));
openlog(uniqid(), LOG_ODELAY, LOG_LOCAL0);
syslog(LOG_INFO, 'BEGIN');
syslog(LOG_INFO, "URI\t{$_SERVER['REQUEST_URI']}");
$browserHash = substr(md5($_SERVER['HTTP_USER_AGENT']), 0, 7);
syslog(LOG_INFO, "CLIENT\t{$_SERVER['REMOTE_ADDR']}\t{$browserHash}");

# src/log-end.php

<?php

syslog(LOG_INFO, "DISPATCH TIME\t" . round(microtime(true) - START_TIME, 2));
syslog(LOG_INFO, 'END');

And let’s require these scripts in app.php:

<?php

require_once(dirname(__DIR__) . '/src/log-begin.php');
syslog(LOG_INFO, "MODE\tPROD");

# original app.php contents

require_once(dirname(__DIR__) . '/src/log-end.php');

For the development environment, we want to require these scripts in app_dev.php as well. The code to do so would be the same as above, except we would set the MODE to DEV rather than PROD.
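
For reference, here is a sketch of what that app_dev.php version would look like, mirroring the app.php changes above:

<?php

require_once(dirname(__DIR__) . '/src/log-begin.php');
syslog(LOG_INFO, "MODE\tDEV");

# original app_dev.php contents

require_once(dirname(__DIR__) . '/src/log-end.php');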

We also want to track what controllers are being invoked, so let’s add one more line in Acme\DemoBundle\EventListener\ControllerListener, right at the beginning of the ControllerListener::onKernelController() method:

syslog(LOG_INFO, "CONTROLLER\t" . get_class($event->getController()[0]));

Note that these changes total a mere 15 extra lines of code, but can collectively yield a wealth of information.

Analyzing the Data in Your Log Files

For starters, let’s see how many HTTP requests are required to serve each page load.

Here’s the info in the logs for one request, based on the way we’ve configured our logging:

Mar  3 12:04:20 log-sandbox 54f58724b1ccc: BEGIN
Mar  3 12:04:20 log-sandbox 54f58724b1ccc: URI    /app_dev.php/
Mar  3 12:04:20 log-sandbox 54f58724b1ccc: CLIENT 192.168.56.1    1b101cd
Mar  3 12:04:20 log-sandbox 54f58724b1ccc: MODE   DEV
Mar  3 12:04:23 log-sandbox 54f58724b1ccc: CONTROLLER Acme\DemoBundle\Controller\WelcomeController
Mar  3 12:04:25 log-sandbox 54f58724b1ccc: DISPATCH TIME  4.51
Mar  3 12:04:25 log-sandbox 54f58724b1ccc: END
Mar  3 12:04:25 log-sandbox 54f5872967dea: BEGIN
Mar  3 12:04:25 log-sandbox 54f5872967dea: URI    /app_dev.php/_wdt/59b8b6
Mar  3 12:04:25 log-sandbox 54f5872967dea: CLIENT 192.168.56.1    1b101cd
Mar  3 12:04:25 log-sandbox 54f5872967dea: MODE   DEV
Mar  3 12:04:28 log-sandbox 54f5872967dea: CONTROLLER Symfony\Bundle\WebProfilerBundle\Controller\ProfilerController
Mar  3 12:04:29 log-sandbox 54f5872967dea: DISPATCH TIME  4.17
Mar  3 12:04:29 log-sandbox 54f5872967dea: END

So now we know that each page load is actually served with two HTTP requests.

Actually, there are two points worth mentioning here. First, two requests per page load is the behavior when using Symfony in dev mode (which I have done throughout this article). You can identify dev mode calls by searching for /app_dev.php/ URL chunks. Second, these are only the requests that reach the Symfony app; as we saw earlier in the nginx access logs, each page load actually triggers more HTTP calls, some of which are for static content.

OK, now let’s surf a bit on the demo site (to build up the data in the log files) and let’s see what else we can learn from these logs.

How many requests were served in total since the beginning of the logfile?

# grep -c BEGIN info.log
10

Did any of them fail (did the script shut down without reaching the end)?

# grep -c END info.log
10

We see that the number of BEGIN and END records match, so this tells us that all of the calls were successful. (If the PHP script had not completed successfully, it would not have reached execution of the src/log-end.php script.)

What was the percentage of requests to the site root?

# grep -cE "\s/app_dev.php/$" info.log
2

This tells us that there were 2 page loads of the site root. Since we previously learned that (a) there are 2 requests to the app per page load and (b) there were a total of 10 HTTP requests, the percentage of requests to the site root was 40% (i.e., 2×2/10).

Which controller class is responsible for serving requests to site root?

# grep -E "\s/$|\s/app_dev.php/$" info.log | head -n1
Mar  3 12:04:20 log-sandbox 54f58724b1ccc: URI  /app_dev.php/

# grep 54f58724b1ccc info.log | grep CONTROLLER
Mar  3 12:04:23 log-sandbox 54f58724b1ccc: CONTROLLER   Acme\DemoBundle\Controller\WelcomeController

Here we used the unique ID of a request to check all log messages related to that single request. We thereby were able to determine that the controller class responsible for serving requests to site root is Acme\DemoBundle\Controller\WelcomeController.

Which clients with IPs of subnet 192.168.0.0/16 have accessed the site?

# grep CLIENT info.log | cut -d":" -f4 | cut -f2 | sort | uniq
192.168.56.1

As expected in this simple test case, only my host computer has accessed the site. This is of course a very simplistic example, but the capability that it demonstrates (of being able to analyse the sources of the traffic to your site) is obviously quite powerful and important.

How much of the traffic to my site has been from Firefox?

Having 1b101cd as the hash of my Firefox User-Agent, I can answer this question as follows:

# grep -c 1b101cd info.log
8
# grep -c CLIENT info.log
10

Answer: 80% (i.e., 8/10)

What is the percentage of requests that yielded a “slow” response?

For purposes of this example, we’ll define “slow” as taking more than 5 seconds to provide a response. Accordingly:

# grep "DISPATCH TIME" info.log | grep -cE "\s[0-9]{2,}\.|\s[5-9]\."
2

Answer: 20% (i.e., 2/10)

Did anyone ever supply GET parameters?

# grep URI info.log | grep \?

No. Symfony Standard uses only URL slugs, so this also tells us that no one has attempted to hack the site.

These are just a handful of relatively rudimentary examples of the ways in which logs files can be creatively leveraged to yield valuable usage information and even basic analytics.

Other Things to Keep in Mind

Keeping Things Secure

Another heads-up is for security. You might think that logging requests is a good idea and, in most cases, it indeed is. However, it’s important to be extremely careful about removing any potentially sensitive user information before storing it in the log.
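
As a simple illustration of this, here is a sketch of masking obviously sensitive fields before they reach the log; the list of field names is purely hypothetical and would depend on your application:

<?php

function sanitize_for_log(array $params)
{
    // Hypothetical list of fields that should never end up in a log file.
    $sensitive = array('password', 'credit_card', 'token');
    foreach ($sensitive as $key) {
        if (isset($params[$key])) {
            $params[$key] = '***';
        }
    }
    return $params;
}

syslog(LOG_INFO, "PARAMS\t" . json_encode(sanitize_for_log($_POST)));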

Fighting Log File Bloat

Since log files are text files to which you always append information, they are constantly growing. Since this is a well-known issue, there are some fairly standard approaches to controlling log file growth.

The easiest is to rotate the logs. Rotating logs means:

  • Periodically replacing the log with a new empty file for further writing
  • Storing the old file for history
  • Removing files that have “aged” sufficiently to free up disk space
  • Making sure the application can write to the logs uninterrupted when these file changes occur

The most common solution for this is logrotate, which ships pre-installed with most *nix distributions. Let’s see a simple configuration file for rotating our logs:

/var/log/my/debug.log
/var/log/my/info.log
/var/log/my/warning.log
/var/log/my/error.log
{
    rotate 7
    daily
    missingok
    notifempty
    delaycompress
    compress
    sharedscripts
    postrotate
        invoke-rc.d rsyslog rotate > /dev/null
    endscript
}

Another, more advanced approach is to make rsyslogd itself write messages into files, dynamically created based on current date and time. This would still require a custom solution for removal of older files, but lets devops manage timeframes for each log file precisely. For our example:

$template DynaLocal0Err,        "/var/log/my/error-%$NOW%-%$HOUR%.log"
$template DynaLocal0Info,       "/var/log/my/info-%$NOW%-%$HOUR%.log"
$template DynaLocal0Warning,    "/var/log/my/warning-%$NOW%-%$HOUR%.log"
$template DynaLocal0Debug,      "/var/log/my/debug-%$NOW%-%$HOUR%.log"
local0.err      -?DynaLocal0Err
local0.info     -?DynaLocal0Info
local0.warning  -?DynaLocal0Warning
local0.debug    -?DynaLocal0Debug

This way, rsyslog will create an individual log file each hour, and there won’t be any need for rotating them and restarting the daemon. Here’s how log files older than 5 days can be removed to accomplish this solution:

find /var/log/my/ -mtime +5 -print0 | xargs -0 rm

Remote Logs

As the project grows, parsing information from logs gets more and more resource hungry. This not only means creating extra server load; it also means creating peak load on the CPU and disk drives at the times when you parse logs, which can degrade server response time for users (or in a worst case can even bring the site down).

To solve this, consider setting up a centralized logging server. All you need for this is another box with UDP port 514 (default) open. To make rsyslogd listen to connections, add the following line to its config file:

$UDPServerRun 514

Having this, setting up the client is then as easy as:

*.*   @HOSTNAME:514

(where HOSTNAME is the host name of your remote logging server).

Conclusion

While this article has demonstrated some of the creative ways in which log files can offer way more valuable information than you may have previously imagined, it’s important to emphasize that we’ve only scratched the surface of what’s possible. The extent, scope, and format of what you can log is almost limitless. This means that – if there’s usage or analytics data you want to extract from your logs – you simply need to log it in a way that will make it easy to subsequently parse and analyze. Moreover, that analysis can often be performed with standard Linux command line tools like grep, sed, or awk.

Indeed, PHP log files are a most powerful tool that can be of tremendous benefit.

Resources

Code on GitHub: https://github.com/isanosyan/toptal-blog-logs-post-example


Appendix: Reading and Manipulating Log Files in the Unix Shell

Here is a brief intro to some of the more common *nix command line tools that you’ll want to be familiar with for reading and manipulating your log files.

  • cat is perhaps the most simple one. It prints the whole file to the output stream. For example, the following command will print logfile1 to the console:
    cat logfile1
    
  • The > character allows the user to redirect output, for example, into another file. It opens the target stream in write mode (which means wiping the target’s contents). Here’s how we replace the contents of tmpfile with the contents of logfile1:
    cat logfile1 > tmpfile
    
  • >> redirects output and opens the target stream in append mode. The current contents of the target file will be preserved, and new lines will be added to the bottom. This will append logfile1 contents to tmpfile:
    cat logfile1 >> tmpfile
    
  • grep filters the file by some pattern and prints only matching lines. The command below will only print lines of logfile1 containing the Bingo message:
    grep Bingo logfile1
    
  • cut prints the contents of a single column (by number, starting from 1). By default, it uses the tab character as the delimiter between columns. For example, if you have a file full of timestamps in the format YYYY-MM-DD HH:MM:SS, this will allow you to print only the years:
    cut -d"-" -f1 logfile1
    
  • head displays only the first lines of a file
  • tail displays only the last lines of a file
  • sort sorts lines in the output
  • uniq filters out duplicate lines
  • wc counts words (or lines when used with the -l flag)
  • | (i.e., the “pipe” symbol) supplies output from one command as input to the next. The pipe is very convenient for combining commands. For example, here’s how we can find the months of 2014 that occur within a set of timestamps:
    grep -E "^2014" logfile1 | cut -d"-" -f2 | sort | uniq
    

Here we first match lines against the regular expression “starts with 2014”, then cut out the months. Finally, we use a combination of sort and uniq to print each occurrence only once.

This article was originally posted on Toptal

Build Ultra-Modern Web Apps with Angular Material


At the Google I/O Conference back in 2014, Google announced Material Design, their new design language. They have since converted much of their popular applications to adhere to this new spec in an effort to provide a consistent experience. Now they are trying to convince you to follow along as well.

Angular Material: Superheroic Javascript Framework Meets Ultra-Modern Design

What is Material Design?

After a visit to the official Material Design spec, you will immediately get a feeling of ultra-modern minimalism. Basic shapes and flat colors are the theme here. Going through the documentation is quite an experience. I recommend taking a look for yourself, but I will summarize it here.

Goal

The purpose is to create a visual language that synthesizes classic principles of good design with the innovation and possibility of technology and science. Also to develop a single underlying system that allows for a unified experience across various platforms and device sizes.

Principles

Material Design is founded on three principles.

Material Is the Metaphor

Inspired by the study of paper and ink, the material lives in 3D space and is grounded in tactile reality. It gives the illusion of space by using realistic shadows. The paper material must abide by the laws of physics (i.e., two pieces of paper may not travel through each other), but may supersede the physical world (i.e., a paper may grow or shrink).

Bold, Graphic, Intentional

Deliberate color choices, edge-to-edge imagery, large-scale typography, and intentional white space create a bold and graphic interface that immerses the user in the experience. The Floating Action Button, or FAB, is a prime example of this principle. Have you noticed that little circle with the ‘plus’ symbol floating around in your Google Inbox app? Material Design makes it very apparent that this is an important button.

Motion Provides Meaning

Motion is meaningful and appropriate, serving to focus attention and maintain continuity. Feedback is subtle yet clear. Transitions are efficient yet coherent. The main point here is to animate only when it has a purpose and not to overdo it.

How does AngularJS fit into Material Design?

AngularJS, Google’s “Superheroic JavaScript MVW Framework”, addresses many of the challenges encountered in developing single-page applications (SPA). It provides the framework needed for creating modern web applications that connect to APIs and never need the page to be refreshed.

AngularJS: A New Approach

Angular is what HTML would have been, had it been designed for applications. HTML is a great declarative language for static documents, but not so much for creating dynamic applications.

Creating dynamic applications with HTML has always been an exercise in tricking the browser into doing things it wasn’t meant to do. There are a couple of approaches to doing this.

  1. Library – a collection of functions. (jQuery)
  2. Framework – code dynamically fills in static elements when needed. (Durandal, Ember)

Angular takes a different approach to solve this problem. Instead of struggling with the HTML it is given, it creates new HTML constructs. Angular teaches the browser new HTML syntax through a construct called ‘directives’. Angular comes with a set of these directives built-in, but also allows you to create custom directives, so it allows you to write your own HTML elements.

Wouldn’t it be neat if Google created a set of directives based on Material Design principles?

Introducing Angular Material

Google is actively developing Angular Material, an implementation of Material Design in AngularJS. Angular Material provides a set of reusable UI components based on the Material Design system. Angular Material is composed of several pieces. It has a CSS library for typography and other elements, it provides an interesting JavaScript approach for theming, and its responsive layout uses a flex grid. But the most appealing feature of Angular Material is its amazing collection of directives.

Getting Started

I have created an open source project to help jumpstart your next Angular Material project. The purpose of this project is to give an example of everything Angular Material has to offer, all under one roof. Navigation, paging, theming, and the entire collection of directives are ready to go, all you have to do is feed in your data and bind it to the HTML.

Take a look at the demo here or fork the code on GitHub.

Directives

Directives are a core Angular feature. Angular comes with several directives that you use all of the time like ng-model or ng-repeat. They are a very important piece of Angular that makes the framework function as it should.

How to Use an Angular Material Directive

Angular Material extends this directive library with a set of beautiful Material Design inspired directives. Angular Material directives are HTML tags that begin with ‘md’; short for Material Design. They couldn’t be much easier to use. For example, let’s take a look at the good old button.

A standard HTML button might look something like this.

<button>Click Me</button>

An Angular Material button looks like this.

<md-button>Click Me</md-button>

And this is all that is needed to make a Material button. Now, there are several other options that are available for this directive such as theming it and raising it from the surface to imply importance.

<md-button class="md-raised md-primary md-hue-1">Click Me</md-button>

Services

Services are also core to Angular functionality. They are used to share code across the application. A common core service like $http is used and reused for data calls in Angular applications.

Angular services are:

  1. Lazily instantiated – Angular only instantiates a service when an application component depends on it.
  2. Singletons – Each component dependent on a service gets a reference to the single instance generated by the service factory.

How to Use an Angular Material Service

Angular Material comes packaged with some services that provide some extra functionality to the application. They also contribute to the performance of some of the directives. A great example of a service is the ‘toast’.

A toast is a small notification that slides in from the top of the screen and goes away after a few seconds. Using this service is easy.

In JavaScript,

$mdToast.show(
      $mdToast.simple('Simple Toast!')
        .position('left bottom')
        .hideDelay(3000)
    );

This example shows a simple toast that pops up on the bottom left of the screen and retreats after 3 seconds.

Some services can be personalized with custom templates. In this case, the $mdToast service can use a custom HTML template by using the md-toast directive.
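
For example, here is a sketch of showing a toast from a custom template; the template path is hypothetical:

// In JavaScript, point the service at your own template.
$mdToast.show({
    templateUrl: 'custom-toast.html',
    hideDelay: 3000,
    position: 'bottom right'
});

<!-- custom-toast.html -->
<md-toast>
    <span flex>Profile saved</span>
</md-toast>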

Theming

Material Design is a visual language where themes convey meaning through color, tones, and contrast. These themes are expressed throughout the components in the entire application to provide a more unified feel.

According to the Material Design guidelines, you must “limit your selection of colors by choosing three color hues from the primary palette and one accent color from the secondary palette.” Angular Material makes following this guideline simple by using JavaScript to configure the theme. But first, what is a palette and a hue?

  • Hue: A hue is a single color in a palette.
  • Palette: A palette is a collection of hues.

For example, a palette would be ‘green’ and a hue is a particular shade of green. Angular Material comes packaged with all of the valid palettes from the Material Design spec. You can learn more about the valid color palettes here.

Theming your project is a piece of cake. In the app.js file, set your desired palettes and hues using the Theming Provider service.

angular.module('myApp', ['ngMaterial'])
.config(function($mdThemingProvider) {
  $mdThemingProvider.theme('default')
    .primaryPalette('cyan', {
      'default': '400',
      'hue-1': '100',
      'hue-2': '600',
      'hue-3': 'A100'
    })
    .accentPalette('amber')
    .warnPalette('red')
    .backgroundPalette('grey');
});

Using the Theme

To apply the theme to the components, set the class of the element to the desired palette and hue.

<md-button class="md-primary">Click me</md-button>
<md-button class="md-primary md-hue-1">Click me</md-button>
<md-button class="md-primary md-hue-2">Click me</md-button>
<md-button class="md-accent">or maybe me</md-button>
<md-button class="md-warn">Careful</md-button>

Layout

Flexbox is the latest and greatest addition to responsive design and Angular Material comes packaged with it. If you are familiar with the Bootstrap grid system, then you should be able to catch on quickly. In fact, Bootstrap is switching to Flexbox in its upcoming release. It has the familiar rows and columns layout you have become accustomed to, but with much more. Learn how to use Flexbox with this tutorial or study the official documentation.
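
For a quick taste, Angular Material exposes the flex grid through layout and flex attributes; the markup below is just an illustrative snippet:

<div layout="row" layout-align="space-between center">
    <div flex="33">Sidebar content</div>
    <div flex>Main content fills the remaining space</div>
</div>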

Top 9 Best Angular Material Directives

There are too many Angular Material directives to list them all, so I would like to share with you my favorites.

9. Progress Linear

Often in SPAs, pages need time to load data from the server. If the application shows a blank page during this time, users may think the application is broken and will leave. Let users know the data is loading with the Progress Linear directive. Users will know to wait when they see an animated progress bar indicating that something is happening. Alternatively, use the Progress Circular directive for a round indicator.

8. Date Picker

The Date Picker directive makes choosing a date a clean, simple experience for the user and a true one-liner to write. Simply use md-datepicker and optionally confine the range with md-min-date and md-max-date and that’s it.
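
A minimal usage sketch (the model and range variables are assumed to be defined on your scope):

<md-datepicker ng-model="ctrl.myDate"
               md-placeholder="Enter date"
               md-min-date="ctrl.minDate"
               md-max-date="ctrl.maxDate"></md-datepicker>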

7. Autocomplete

Autocomplete provides a pleasant user experience by helping the user choose an option. It is what makes Google’s search engine the best. The Autocomplete directive adds this functionality to your application by completing a user’s words as they type. But the best part about this directive is customization. By filling your autocomplete with md-item-template you can give more meaning to the suggestions. For instance, if a user was searching for names in a company, the autocomplete could show the matching names with their picture and company role, giving a more robust user experience.

6. Bottom Sheet

The bottom sheet is a little menu that slides up from the bottom of your screen, covering content and taking focus. Originally intended to be used solely for mobile devices, the bottom sheet has been gaining popularity on larger screens. To use it, create a template with md-bottom-sheet containing either an md-grid or an md-list for a grid layout or list layout, respectively. Then call it with the Bottom Sheet service, $mdBottomSheet.show().
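
A minimal sketch of how that fits together; the template path, controller name, items array, and click handler are all assumptions for illustration:

// Summon the bottom sheet from a controller.
$mdBottomSheet.show({
    templateUrl: 'bottom-sheet-list.html',
    controller: 'ListBottomSheetCtrl'
});

<!-- bottom-sheet-list.html -->
<md-bottom-sheet class="md-list">
    <md-list>
        <md-list-item ng-repeat="item in items">
            <md-button ng-click="listItemClick($index)">{{ item.name }}</md-button>
        </md-list-item>
    </md-list>
</md-bottom-sheet>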

5. Input

Input forms are boring and have been since the beginning of the internet. But they don’t have to be! Give your inputs some flair with the Input directive. Wrap your input tag with md-input-container and watch it come to life. Watch as your placeholder animates into a floating label. Easily validate your input with instant, but subtle, color changes and warning messages. The Input directive takes an element that is expected to be boring and delivers a pleasant surprise.
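
For instance, a basic usage sketch (the model name is just an example):

<md-input-container>
    <label>Email</label>
    <input type="email" ng-model="user.email" required>
</md-input-container>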

4. Toast

The most aggravating user experience is not knowing what the application is doing. We ease this aggravation with toasters, or little unobtrusive notifications. In the olden days, when we sent a request to the server we waited on that page until the response came back before we could move on. User attention span has dropped drastically since then. In today’s SPAs, we click a button and expect to move along immediately, dealing with the server response when it comes. The Toast directive makes this a piece of cake. A toaster is summoned by simply using the Toast Service, $mdToast.show(), and setting the text, duration, and which corner to appear in. Make your own custom toaster with md-toast.

3. Grid List

Are your lists lacking pizazz? Grid lists are an alternative to standard list views. A grid list is best for presenting images, and is optimized for visual comprehension. It works by laying different sized tiles on a grid, giving a scattered, eclectic feel. The tile size and layout then respond to the screen size. This directive is sure to give your application an exciting and fun look.

2. Whiteframe

The concept of space is the core of Material Design and its paper metaphor. Two sheets of paper in the same z-position (or depth), form a seam and must move together. Two overlapping sheets of paper, with different z-positions, form a step. They move independently of each other. To follow the design, we must be able to shift elements along the z-axis. Angular Material provides a simple way to do this. Using the Whiteframe directive, set the class to md-whiteframe-z{x}, where x is the units of depth up from the background. The larger the number, the larger the shadow cast by the paper.
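
For example, following the md-whiteframe-z{x} convention described above:

<div class="md-whiteframe-z1">Sits just above the background</div>
<div class="md-whiteframe-z3">Appears higher and casts a deeper shadow</div>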

1. Sidenav

Creating a side navigation menu has never been easier. The Sidenav directive places a navigation menu on either the left or right of the screen. Keeping mobile in mind, it swipes in and out as expected, or programmatically with a button click. A nice addition is the lock open feature. The side navigation can be set to lock open when the screen reaches a certain size. By setting the parameter md-is-locked-open="$mdMedia('gt-sm')" the menu will be tucked away on the phone but locked open on tablet and larger.
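
A minimal sketch of a left-hand menu that stays open on larger screens (the navigation content itself is omitted):

<md-sidenav md-component-id="left"
            md-is-locked-open="$mdMedia('gt-sm')"
            class="md-sidenav-left">
    <!-- navigation links go here -->
</md-sidenav>

// Toggle it programmatically, e.g. from a toolbar button.
$mdSidenav('left').toggle();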

Conclusion

Google is converting their most popular applications to Material Design. Now they are heading the development of Angular Material, an implementation of Material Design written in AngularJS. Material Design uses a paper metaphor, bold intentions, and meaningful motion. AngularJS organizes single page applications. Angular Material applies Material Design principles to AngularJS applications.

Material Design is here and Angular Material is a fantastic way to apply the Material Design spec to your single page applications. If you want to create your own Angular Material application, don’t waste your time starting from scratch. Rather, start off with a fully functioning app with demos of the directives, theming already set up, and navigation and routing ready to go. Take a look at the demo here or fork the code on GitHub. Of course, you can also learn all about Angular Material by visiting the official documentation.

What do you think about my picks for the best Angular Material directives? Did I get them right? What are your favorites?


This post originally appeared on toptal.com

10 Essential WordPress Interview Questions


  1. Consider the following code snippet. Briefly explain what changes it will achieve, who can and cannot view its effects, and at what URL WordPress will make it available.
add_action('admin_menu', 'custom_menu');

function custom_menu(){
    add_menu_page('Custom Menu', 'Custom Menu', 'manage_options', 'custom-menu-slug', 'custom_menu_page_display');
}

function custom_menu_page_display(){
    echo '<h1>Hello World</h1>';
    echo '<p>This is a custom page</p>';
}

With default settings and roles, admins can view it and all lower roles cannot. In fact, this menu item will only be visible to users who have the “manage_options” capability, i.e. those allowed to change settings from the WordPress admin dashboard.

The admin custom page will be made available at this (relative) URL: “?page=custom-menu-slug”.

2. How would you change all the occurrences of “Hello” into “Good Morning” in post/page contents, when viewed before 11AM?

In a plugin or in theme functions file, we must create a function that takes text as input, changes it as needed, and returns it. This function must be added as a filter for “the_content”.

It’s important that we put a little effort to address some details:

  • Only change occurrences where “hello” is a whole, isolated word. This will prevent words like “Schellong” from becoming “Scgood morningng”. To do that, we must use “word boundary” anchors in the regular expression, wrapping the word in a pair of “\b”.
  • Keep consistency with the letter case. An easy way to do that is to make the replace case sensitive.
<?php
function replace_hello($the_content){
    if(current_time('G')<=10){
        $the_content=preg_replace('/\bhello\b/','good morning',$the_content);
        $the_content=preg_replace('/\bHello\b/','Good Morning',$the_content);
    }
    return $the_content;
}
add_filter('the_content', 'replace_hello');

3. What is the $wpdb variable in WordPress, and how can you use it to improve the following code?

<?php
function perform_database_action(){
    mysql_query("INSERT INTO table_name (col1, col2, col3) VALUES ('$value1', '$value2', '$value3')");
}

$wpdb is a global variable that contains the WordPress database object. It can be used to perform custom database actions on the WordPress database. It provides the safest means for interacting with the WordPress database.

The code above doesn’t follow WordPress best practices, which strongly discourage the use of any direct mysql_query call. WordPress provides easier and safer solutions through $wpdb.

The above code can be modified to be as follows:

<?php
function perform_database_action(){
    global $wpdb;
    $data= array('col1'=>$value1,'col2'=>$value2,'col3'=>$value3);
    $format = array('%s','%s','%s');
    $wpdb->insert('table_name', $data, $format);
}

4. Why won’t the script in the following code snippet be added to the page, and how would you fix it?

add_custom_script();
function add_custom_script(){
    wp_enqueue_script( 
        'jquery-custom-script',
        plugin_dir_url( __FILE__ ).'js/jquery-custom-script.js'
    );
}

wp_enqueue_script is usually used to inject JavaScript files into the HTML.

The script we are trying to queue will not be added, because “add_custom_script()” is called with no hooks. To make this work properly we must use the wp_enqueue_scripts hook. Some other hooks will also work such as init, wp_print_scripts, and wp_head.

Furthermore, since the script seems to be dependent on jQuery, it’s recommended to declare it as such by adding array(‘jquery’) as the 3rd parameter.

Proper use:

add_action('wp_enqueue_scripts', 'add_custom_script');
function add_custom_script(){
    wp_enqueue_script( 
        'jquery-custom-script',
        plugin_dir_url( __FILE__ ).'js/jquery-custom-script.js',
        array( 'jquery')
    );
}

5. Assume we have a file named “wp-content/plugins/hello-world.php” with the following content. What is it missing in order to be recognized as a plugin and run properly?

<?php
add_filter('the_content', 'hello_world');
function hello_world($content){
    return $content . "<h1> Hello World </h1>";
}

The file is missing the plugin headers. Every plugin should include at least the plugin name in the header with the following syntax:

<?php
/*
Plugin Name: My hello world plugin
*/

6. What is a potential problem in the following snippet of code from a WordPress theme file named “footer.php”?

...
        </section><!-- end of body section -->
        <footer>All rights reserved</footer>
    </body>
</html>

All footer files must call the <?php wp_footer() ?> function, ideally right before the </body> tag. This will insert references to all scripts and stylesheets that have been added by plugins, themes, and WordPress itself to the footer.

7. What is this code for? How can the end user use it?

function new_shortcode($atts, $content = null) {
    extract(shortcode_atts(array(
        "type" => "warning"
    ), $atts));
    return '<div class="alert alert-' . $type . '">' . $content . '</div>';
}
add_shortcode("warning_box", "new_shortcode");

This shortcode allows authors to show an alert box in posts or pages wherever the shortcode itself is added. The HTML code generated is a div with the class name “alert” plus an extra class name, “alert-warning” by default. The “type” parameter can change this second class to change the visual aspect of the alert box.

Those class naming structures are compatible with Bootstrap.

To use this shortcode, the user has to insert the following code within the body of a post or a page:

[warning_box]Warning message[/warning_box]

8. Is WordPress safe from brute force login attempts? If not, how can you prevent such an attack vector?

No, WordPress on its own is vulnerable to brute force login attempts.

Some good examples of actions performed to protect a WordPress installation against brute force are:

  • Do not use the “admin” username, and use strong passwords.
  • Password protect “wp-login.php”.
  • Set up some server-side protections (IP-based restrictions, firewall, Apache/Nginx modules, etc.)
  • Install a plugin to add a captcha, or limit login attempts.

9. The following line is in a function inside a theme’s “functions.php” file. What is wrong with this line of code?

wp_enqueue_script('custom-script', '/js/functions.js');

Assuming that the “functions.js” file is in the theme’s “js/” folder, we should build the path with ‘get_template_directory_uri()’, e.g. get_template_directory_uri() . '/js/functions.js'; otherwise, the visitor’s browser will look for the file in the root directory of the website.

10. Suppose you have a non-WordPress PHP website with a WordPress instance in the “/blog/” folder. How can you show a list of the last 3 posts in your non-WordPress pages?

One obvious way is to download, parse, and cache the blog’s RSS feeds. However, since the blog and the website are on the same server, you can use the full power of WordPress, even outside of it.

The first thing to do is to include the “wp-load.php” file. After which you will be able to perform any WP_Query and use any WordPress function such as get_posts, wp_get_recent_posts, query_posts, and so on.

<?php
    require_once('../blog/wp-load.php');
?>
<h2>Recent Posts</h2>
<ul>
<?php
    $recent_posts = wp_get_recent_posts(array('numberposts' => 3));
    foreach($recent_posts as $recent){
        echo '<li><a href="' . get_permalink($recent["ID"]) . '">' . $recent["post_title"] . '</a></li> ';
    }
?>
</ul>

Source: Toptal

A New Way of Using Email for Support Apps: An AWS Tutorial


Email may not be as cool as other communication platforms but working with it can still be fun. I was recently tasked with implementing messaging in a mobile app. The only catch was that the actual communication needed to be over email. We wanted app users to be able to communicate with a support team just like you would send a text message. Support team members needed to receive these messages via email, and also needed to be able to respond to the originating user. To the end user, everything needed to look and function just like any other modern messaging app.

In this article, we will take a look at how to implement a service similar to the one described above using Java and a handful of Amazon’s web services. You will need a valid AWS account, a domain name, and access to your favorite Java IDE.

The Infrastructure

Before we write any code, we’re going to set up the required AWS services for routing and consuming email. We’re going to use SES for sending and consuming emails and SNS+SQS for routing incoming messages.

Consuming Email Programmatically Using AWS

Revitalize e-mail in support applications with Amazon SES.

It all starts here with SES. Start by logging into your AWS account and navigating to the SES console.

Before we begin, you’re going to need a verified domain name you can send emails from.

This will be the domain app users will be sending email messages from and support members will be replying to. Verifying a domain with SES is a straightforward process, and more info can be found here.

If this is the first time you are using SES, or you have not requested a sending limit, your account will be sandboxed. This means that you will not be able to send email to addresses that aren’t verified with AWS. This may cause an error later in the tutorial, when we send an email to our fictional help desk. To avoid this, you can verify whatever email address you plan on using as your help desk in the SES console in the Email Addresses tab.

Once you have a verified domain, we can create a rule set. Navigate to the Rule Sets tab in the SES console and create a new Receipt Rule.

The first step when creating a receipt rule will be defining a recipient.

Recipient filters allow you to define which emails SES will consume, and how to process each incoming message. The recipient we define here needs to match the domain and address pattern app user messages are emailed from. The simplest case here would be to add a recipient for the domain we previously verified, in our case example.com. This will configure SES to apply our rule to all emails sent to example.com (e.g. foo@example.com, bar@example.com).

To create a rule for our entire domain, we would add a recipient for example.com.

It’s also possible to match address patterns. This is useful if you want to route incoming messages to different SQS queues.

Say that we have queue A and queue B. We could add two recipients: a@example.com and b@example.com. If we want to insert a message into queue A, we would email a+foo@example.com. The a part of this will match our a@example.com recipient. Everything between the + and @ is arbitrary user data; it will not affect SES’s address matching. To insert into queue B, simply replace a with b.

After you define your recipients, the next step is to configure the action SES will perform after consuming a new email. We eventually want these to end up in SQS, however it is currently not possible to go directly from SES to SQS. To bridge the gap, we need to use SNS. Select the SNS action and create a new topic. We will eventually configure this topic to insert messages into SQS.

Select create SNS topic and give it a name.

After the topic is created, we need to select a message encoding. I’m going to use Base64 in order to preserve special characters. The encoding you choose will affect how messages are decoded when we consume them in our service.

Once the rule is set, we just need to name it.

The next step will be configuring SQS and SNS, for that we need to head over to the SQS console and create a new queue.

To keep things simple, I’m using the same name as our SNS topic.

After we define our queue, we’re going to need to adjust its access policy. We only want to grant our SNS topic permission to insert. We can achieve this by adding a condition that matches our SNS topic arn.

The value field should be populated with the ARN for the SNS topic SES is notifying.

After SQS is set up, it’s time for one more trip back to the SNS console to configure your topic to insert notifications into your shiny new SQS queue.

In the SNS console, select the topic SES is notifying. From there, create a new subscription. The subscription protocol should be Amazon SQS, and the destination should be the ARN of the SQS queue you just generated.

After all that, the AWS side of the equation should be all set up. We can test our work by emailing ourselves. Send an email to the domain configured with SES, then head to the SQS console and select your queue. You should be able to see the payload containing your email.

Java Service to Deal with Emails

Now on to the fun part! In this section, we’re going to create a simple microservice capable of sending messages and processing incoming emails. The first step will be defining an API that will email our support desk on behalf of a user.

A quick note. We’re going to focus on the business logic components of this service, and won’t be defining REST endpoints or a persistence layer.

To build a Spring service, we’re going to use Spring Boot and Maven. We can use Spring Initializr (start.spring.io) to generate a project for us.

To start, our pom.xml should look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>com.toptal.tutorials</groupId>
	<artifactId>email-processor</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<packaging>jar</packaging>

	<name>email-processor</name>
	<description>A simple "micro-service" for emailing support on behalf of a user and processing replies</description>

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>1.3.5.RELEASE</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>

	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		<java.version>1.8</java.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
	</dependencies>

	
	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

</project>

Emailing Support on Behalf of a User

First, let’s define a bean for emailing our support desk on behalf of a user. The job of this bean will be to process an incoming message from a user ID, and email that message to our pre-defined support desk email address.

Let’s start by defining an interface.

public interface SupportBean {

    /**
     * Send a message to the application support desk on behalf of a user
     * @param fromUserId The ID of the originating user
     * @param message The message to send
     */
    void messageSupport(long fromUserId, String message) throws JsonProcessingException;
}

And an empty implementation:

@Component
public class SupportBeanSesImpl implements SupportBean {

    /**
     * Email address for our application help-desk
     * This is the destination address user support emails will be sent to
     */
    private static final String SUPPORT_EMAIL_ADDRESS = "support@example.com";

    @Override
    public void messageSupport(long fromUserId, String message) {
        //todo: send an email to our support address
    }
}

Let’s also add the AWS SDK to our pom; we’re going to use the SES client to send our emails:

<dependency>
	<groupId>com.amazonaws</groupId>
	<artifactId>aws-java-sdk</artifactId>
	<version>1.11.5</version>
</dependency>

The first thing we need to do is generate an email address to send our user’s message from. The address we generate will play a critical role on the consuming side of our service. It needs to contain enough information to route the help desk’s reply back to the originating user.

To achieve this, we’re going to include the originating user ID in our generated email address. To keep things clean, we’re going to create an object containing the user ID and use the Base64 encoded JSON string of it as the email address.

Let’s create a new bean responsible for turning a user ID into an email address.

public interface UserEmailBean {

    /**
     * Returns a unique per user email address
     * @param userID Input user ID
     * @return An email address unique for the input userID
     */
    String emailAddressForUserID(long userID) throws JsonProcessingException;
}

Let’s start our implementation by adding the required constants and a simple inner class that will help us serialize our JSON.

@Component
public class UserEmailBeanJSONImpl implements UserEmailBean {

    /**
     * The TLD for all our generated email addresses
     */
    private static final String EMAIL_DOMAIN = "example.com";

    /**
     * com.fasterxml.jackson.databind.ObjectMapper used to create a JSON object including our user ID
     */
    private final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public String emailAddressForUserID(long userID) {
//todo: create the email address
        return null;
    }

    /**
     * Simple helper class we will serialize.
     * The JSON representation of this class will become our user email address
     */
    private static class UserDetails{
        private Long userID;

        public Long getUserID() {
            return userID;
        }

        public void setUserID(Long userID) {
            this.userID = userID;
        }
    }
}

Generating our email address is straightforward: all we need to do is create a UserDetails object and Base64 encode its JSON representation. The finished version of our emailAddressForUserID method should look something like this:

   @Override
    public String emailAddressForUserID(long userID) throws JsonProcessingException {
        UserDetails userDetails = new UserDetails();
        userDetails.setUserID(userID);
        //create a JSON representation.
        String jsonString = objectMapper.writeValueAsString(userDetails);
        //Base64 encode it
        String base64String = Base64.getEncoder().encodeToString(jsonString.getBytes());
        //create an email address out of it
        String emailAddress = base64String + "@" + EMAIL_DOMAIN;
        return emailAddress;
    }

Now we can head back to SupportBeanSesImpl and update it to use the new email bean we just created.

private final UserEmailBean userEmailBean;

@Autowired
public SupportBeanSesImpl(UserEmailBean userEmailBean) {
        this.userEmailBean = userEmailBean;
}


@Override
public void messageSupport(long fromUserId, String message) throws JsonProcessingException {
        //user specific email
        String fromEmail = userEmailBean.emailAddressForUserID(fromUserId);
}

To send emails, we’re going to use the AWS SES client included with the AWS SDK.

  /**
     * SES client
     */
    private final AmazonSimpleEmailService amazonSimpleEmailService = new AmazonSimpleEmailServiceClient(
            new DefaultAWSCredentialsProviderChain() //see http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html
    );

We’re utilizing the DefaultAWSCredentialsProviderChain to manage credentials for us; this class will search for AWS credentials as defined here.

We’re going to need an AWS access key provisioned with access to SES and, eventually, SQS. For more info, check out the documentation from Amazon.

The next step will be updating our messageSupport method to email support using the AWS SDK. The SES SDK makes this a straightforward process. The finished method should look something like this:

@Override
public void messageSupport(long fromUserId, String message) throws JsonProcessingException {
        //User specific email
        String fromEmail = userEmailBean.emailAddressForUserID(fromUserId);

        //create the email 
        Message supportMessage = new Message(
                new Content("New support request from userID " + fromUserId), //Email subject
                new Body().withText(new Content(message)) //Email body, this contains the user’s message
        );
        
        //create the send request
        SendEmailRequest supportEmailRequest = new SendEmailRequest(
                fromEmail, //From address, our user's generated email
                new Destination(Collections.singletonList(SUPPORT_EMAIL_ADDRESS)), //to address, our support email address
                supportMessage //Email body defined above
        );
        
        //Send it off
        amazonSimpleEmailService.sendEmail(supportEmailRequest);
    }

To try it out, create a test class and inject the SupportBean. Make sure SUPPORT_EMAIL_ADDRESS defined in SupportBeanSesImpl points to an email address you own. If your SES account is sandboxed, this address also needs to be verified. Email addresses can be verified in the SES console, under the Email Addresses tab.

@Test
public void emailSupport() throws JsonProcessingException {
	supportBean.messageSupport(1, "Hello World!");
}

After running this, you should see a message show up in your inbox. Better yet, reply to the message and check the SQS queue we set up earlier. You should see a payload containing your reply.

Consuming Replies from SQS

The last step will be to read in emails from SQS, parse out the email message, and figure out which user ID the reply should be forwarded to.

Message queueing services like Amazon SQS play a vital role in service-oriented architecture by allowing services to communicate with each other without having to compromise speed, reliability or scalability.

To listen for new SQS messages, we’re going to use the Spring Cloud AWS messaging SDK. This will allow us to configure a SQS message listener via annotations, and thus avoid quite a bit of boilerplate code.

First, the required dependencies.

Add the Spring Cloud messaging dependency:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-aws-messaging</artifactId>
</dependency>

And add Spring Cloud AWS to your pom dependency management:

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-aws</artifactId>
			<version>1.1.0.RELEASE</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Currently, Spring Cloud AWS doesn’t support annotation driven configuration, so we’re going to have to define an XML bean. Luckily we don’t need much configuration at all, so our bean definition will be pretty light. The main point of this file is to enable annotation driven queue listeners, which will allow us to annotate a method as an SqsListener.

Create a new XML file named aws-config.xml in your resources folder. Our definition should look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aws-context="http://www.springframework.org/schema/cloud/aws/context"
       xmlns:aws-messaging="http://www.springframework.org/schema/cloud/aws/messaging"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/cloud/aws/context
       http://www.springframework.org/schema/cloud/aws/context/spring-cloud-aws-context.xsd
       http://www.springframework.org/schema/cloud/aws/messaging
     http://www.springframework.org/schema/cloud/aws/messaging/spring-cloud-aws-messaging.xsd">

    <!--enable annotation driven queue listeners -->
    <aws-messaging:annotation-driven-queue-listener />
    <!--define our region, this lets us reference queues by name instead of by URL. -->
    <aws-context:context-region region="us-east-1" />

</beans>

The important part of this file is <aws-messaging:annotation-driven-queue-listener />. We are also defining a default region. This is not necessary, but doing so will allow us to reference our SQS queue by name instead of URL. We are not defining any AWS credentials; by omitting them, Spring will default to the DefaultAWSCredentialsProviderChain, the same provider we used earlier in our SES bean. More info can be found in the Spring Cloud AWS docs.

To use this XML config in our Spring Boot app, we need to explicitly import it. Head over to your @SpringBootApplication class and import it.

@SpringBootApplication
@ImportResource("classpath:aws-config.xml") //Explicit import for our AWS XML bean definition
public class EmailProcessorApplication {

	public static void main(String[] args) {
		SpringApplication.run(EmailProcessorApplication.class, args);
	}
}

Now let’s define a bean that will handle incoming SQS messages. Spring Cloud AWS lets us accomplish this with a single annotation!

/**
 * Bean responsible for polling SQS and processing new emails
 */
@Component
public class EmailSqsListener {

    @SuppressWarnings("unused") //IntelliJ isn't quite smart enough to recognize methods marked with @SqsListener yet
    @SqsListener(value = "com-example-ses", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)   //Mark this method as a SQS listener
                                                                                                    //Since we already set up our region we can use the logical queue name here
                                                                                                    //Spring will automatically delete messages if this method executes successfully
    public void consumeSqsMessage(@Headers Map<String, String> headers, //Map of headers returned when requesting a message from SQS
                                                                        //This map will include things like the received time, receive count, and message ID
                                  @NotificationMessage String rawJsonMessage   //JSON string representation of our payload
                                                                            //Spring Cloud AWS supports marshaling here as well
                                                                            //For the sake of simplicity we will work with incoming messages as a JSON object
    ) throws Exception{

        //com.amazonaws.util.json.JSONObject included with the AWS SDK
        JSONObject jsonSqsMessage = new JSONObject(rawJsonMessage);

    }
}

The magic here lies with the @SqsListener annotation. With this, Spring will set up an Executor and start polling SQS for us. Every time a new message is found, our annotated method will be invoked with the message contents. Optionally, Spring Cloud can be configured to marshal incoming messages, giving you the ability to work with strongly typed objects inside your queue listener. Additionally, you have the ability to inject a single header or a map of all headers returned from the underlying AWS call.

We’re able to use the logical queue name here since we previously defined the region in aws-config.xml; if we wanted to omit that, we could replace the value with our fully qualified SQS URL. We’re also defining a deletion policy, which configures Spring to delete the incoming message from SQS if its condition is met. There are multiple policies defined in SqsMessageDeletionPolicy; here, we’re configuring Spring to delete our message if our consumeSqsMessage method executes successfully.

We’re also injecting the returned SQS headers into our method using @Headers, and the injected map will contain metadata related to the queue and payload received. The message body is injected using @NotificationMessage. Spring supports marshalling utilizing Jackson, or via a custom message body converter. For the sake of convenience, we’re just going to inject the raw JSON string and work with it using the JSONObject class included with the AWS SDK.

The payload retrieved from SQS will contain a lot of data. Take a look at the JSONObject to familiarize yourself with the payload returned. Our payload contains data from every AWS service it was passed through, SES, SNS, and finally SQS. For the sake of this tutorial, we really only care about two things: the list of email addresses this was sent to and the email body. Let’s start by parsing out the emails.

//Pull out the array containing all email addresses this was sent to
JSONArray emailAddressArray = jsonSqsMessage.getJSONObject("mail").getJSONArray("destination");
for(int i = 0 ; i < emailAddressArray.length() ; i++){
	String emailAddress = emailAddressArray.getString(i);
}

Since, in the real world, our help desk may include more than just the original sender in a reply, we’re going to want to verify the address before we parse out the user ID. This gives our support desk the ability to message multiple users at the same time, as well as the ability to include non-app users.

Let’s head back over to our UserEmailBean interface and add another method.

/**
 * Returns true if the input email address matches our template
 * @param emailAddress Email to check
 * @return true if it matches
 */
boolean emailMatchesUserFormat(String emailAddress);

In UserEmailBeanJSONImpl, to implement this method we’re going to want to do two things. First, check if the address ends with our EMAIL_DOMAIN, then check if we can marshall it.

   @Override
    public boolean emailMatchesUserFormat(String emailAddress) {

        //not our address, return right away
        if(!emailAddress.endsWith("@" + EMAIL_DOMAIN)){
            return false;
        }
        //We just care about the email part, not the domain part
        String emailPart = splitEmail(emailAddress);
        try {
            //Attempt to decode our email
            UserDetails userDetails = objectMapper.readValue(Base64.getDecoder().decode(emailPart), UserDetails.class);
            //We assume this email matches if the address is successfully decoded and marshaled  
            return userDetails != null && userDetails.getUserID() != null;
        } catch (IllegalArgumentException | IOException e) {
            //The Base64 decoder will throw an IllegalArgumentException if the input string is not Base64 formatted
            //Jackson will throw an IOException if it can't read the string into the UserDetails class
            return false;
        }
    }
    /**
     * Splits an email address on @
     * Returns everything before the @
     * @param emailAddress Address to split
     * @return all parts before @. If no @ is found, the entire address will be returned
     */
    private static String splitEmail(String emailAddress){
        if(!emailAddress.contains("@")){
            return emailAddress;
        }
        return emailAddress.substring(0, emailAddress.indexOf("@"));
    }

We defined two new methods, emailMatchesUserFormat which we just added to our interface, and a simple utility method for splitting an email address on the @. Our emailMatchesUserFormat implementation works by attempting to Base64 decode and marshall the address part back into our UserDetails helper class. If this succeeds, we then check to make sure the required userID is populated. If all this works out, we can safely assume a match.

Head back to our EmailSqsListener and inject the freshly updated UserEmailBean.

   private final UserEmailBean userEmailBean;

    @Autowired
    public EmailSqsListener(UserEmailBean userEmailBean) {
        this.userEmailBean = userEmailBean;
    }

Now we’re going to update the consumeSqsMessage method. First let’s parse out the email body:

 //Pull our content, remember the content will be Base64 encoded as per our SES settings
        String encodedContent = jsonSqsMessage.getString("content");
        //Create a new String after decoding our body
        String decodedBody = new String(
                Base64.getDecoder().decode(encodedContent.getBytes())      
        );
        for(int i = 0 ; i < emailAddressArray.length() ; i++){
            String emailAddress = emailAddressArray.getString(i);
        }

Now let’s create a new method that will process the email address and email body.

private void processEmail(String emailAddress, String emailBody){
        
}

And finally, update the email loop to invoke this method if it finds a match.

//Loop over all sent to addresses
for(int i = 0 ; i < emailAddressArray.length() ; i++){
    String emailAddress = emailAddressArray.getString(i);
    //If we find a match, process the email
    if(userEmailBean.emailMatchesUserFormat(emailAddress)){
        processEmail(emailAddress, decodedBody);
    }
}

Before we implement processEmail, we need to add one more method to our UserEmailBean. We need a method for returning the userID from an email. Head back over to the UserEmailBean interface to add its last method.

   /**
     * Returns the userID from a formatted email address.
     * Returns null if no userID is found. 
     * @param emailAddress Formatted email address, this address should be verified using {@link #emailMatchesUserFormat(String)}
     * @return The originating userID if found, null if not
     */
    Long userIDFromEmail(String emailAddress);

The goal of this method will be to return the userID from a formatted address. The implementation will be similar to our verification method. Let’s head over to UserEmailBeanJSONImpl and fill in this method.

   @Override
    public Long userIDFromEmail(String emailAddress) {
        String emailPart = splitEmail(emailAddress);
        try {
            //Attempt to decode our email
            UserDetails userDetails = objectMapper.readValue(Base64.getDecoder().decode(emailPart), UserDetails.class);
            if(userDetails == null || userDetails.getUserID() == null){
                //We couldn't find a userID
                return null;
            }
            //ID found, return it
            return userDetails.getUserID();
        } catch (IllegalArgumentException | IOException e) {
            //The Base64 decoder will throw an IllegalArgumentException if the input string is not Base64 formatted
            //Jackson will throw an IOException if it can't read the string into the UserDetails class
            //Return null since we didn't find a userID
            return null;
        }
    }

Now head back over to our EmailSqsListener and update processEmail to use this new method.

private void processEmail(String emailAddress, String emailBody){
    //Parse out the email address
    Long userID = userEmailBean.userIDFromEmail(emailAddress);
    if(userID == null){
        //Whoops, we couldn't find a userID. Abort!
        return;
    }
}

Great! Now we have almost everything we need. The last thing we need to do is parse out the reply from the raw message.

Email clients, just like web browsers from a few years ago, are plagued by the inconsistencies in their implementations.

Parsing out replies from emails is actually a fairly complicated task. Email message formats are not standardized, and the variations between different email clients can be huge. The raw response is also going to include much more than the reply and a signature. The original message will most likely be included as well. Smart people over at Mailgun put together a great blog post explaining some of the challenges. They also open sourced their machine-learning based approach to parsing emails, check it out here.

The Mailgun library is written in Python, so for our tutorial we’re going to use a simpler Java based solution. GitHub user edlio put together an MIT licensed email parser in Java based on one of GitHub’s libraries. We’re going to use this great library.

First, let’s update our pom; we’re going to use https://jitpack.io to pull in EmailReplyParser.

<repositories>
	<repository>
		<id>jitpack.io</id>
		<url>https://jitpack.io</url>
	</repository>
</repositories>

Now add the GitHub dependency.

<dependency>
	<groupId>com.github.edlio</groupId>
	<artifactId>EmailReplyParser</artifactId>
	<version>v1.0</version>
</dependency>

We’re also going to use Apache Commons Email, since we need to parse the raw email into a javax.mail MimeMessage before passing it off to the EmailReplyParser. Add the commons-email dependency.

<dependency>
	<groupId>org.apache.commons</groupId>
	<artifactId>commons-email</artifactId>
	<version>1.4</version>
</dependency>

Now we can head back over to our EmailSqsListener and finish up processEmail. At this point, we have the originating userID and the raw email body. The only thing left to do is parse out the reply.

To accomplish this, we’re going to use a combination of javax.mail and edlio’s EmailReplyParser.

private void processEmail(String emailAddress, String emailBody) throws Exception {
        //Parse out the email address
        Long userID = userEmailBean.userIDFromEmail(emailAddress);
        if(userID == null){
            //Whoops, we couldn't find a userID. Abort!
            return;
        }

        //Default javax.mail session
        Session session = Session.getDefaultInstance(new Properties());
        //Create a new mimeMessage out of the raw email body
        MimeMessage mimeMessage = MimeMessageUtils.createMimeMessage(
                session,
                emailBody
        );
        MimeMessageParser mimeMessageParser = new MimeMessageParser(mimeMessage);
        //Parse the message
        mimeMessageParser.parse();
        //Parse out the reply for our message
        String replyText = EmailReplyParser.parseReply(mimeMessageParser.getPlainContent());
        //Now we're done!
        //We have both the userID and the response!
        System.out.println("Processed reply for userID: " + userID + ". Reply: " + replyText);
    }

Wrap Up

And that’s it! We now have everything we need to deliver a response to the originating user!

See? I told you email can be fun!

In this article, we saw how Amazon Web Services can be used to orchestrate complex pipelines. Although the pipeline in this article was designed around email, the same tools can be leveraged to design even more complex systems, where you don’t have to worry about maintaining the infrastructure and can focus on the fun aspects of software engineering instead.

This article was written by Francis Altomare, a Toptal Java developer.

Go Programming Language: An Introductory Tutorial


What’s the Go Programming Language?

Go is a recent language which sits neatly in the middle of the landscape, providing lots of good features and deliberately omitting many bad ones. It compiles fast, runs fast-ish, includes a runtime and garbage collection, has a simple static type system and dynamic interfaces, and an excellent standard library.

Go and OOP

OOP is one of those features that Go deliberately omits. It has no subclassing, and so there are no inheritance diamonds or super calls or virtual methods to trip you up. Still, many of the useful parts of OOP are available in other ways.

*Mixins* are available by embedding structs anonymously, allowing their methods to be called directly on the containing struct (see embedding). Promoting methods in this way is called *forwarding*, and it’s not the same as subclassing: the method will still be invoked on the inner, embedded struct.

Embedding also doesn’t imply polymorphism. While `A` may have a `B`, that doesn’t mean it is a `B` — functions which take a `B` won’t take an `A` instead. For that, we need interfaces, which we’ll encounter briefly later.
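
To make that concrete, here is a minimal sketch (the A and B types are invented purely for illustration and aren’t used elsewhere in this tutorial) showing a method being forwarded from an embedded struct:

package main

import "fmt"

// B is a plain struct with one method.
type B struct{}

func (B) Describe() string {
    return "I am a B"
}

// A embeds B anonymously, so B's methods are promoted onto A.
type A struct {
    B
}

func main() {
    a := A{}
    // Describe is forwarded to the embedded B value.
    fmt.Println(a.Describe())

    // Note: a function declared as func(b B) still won't accept an A;
    // embedding gives us forwarding, not polymorphism.
}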

Meanwhile, Go takes a strong position on features that can lead to confusion and bugs. It omits OOP idioms such as inheritance and polymorphism, in favor of composition and simple interfaces. It downplays exception handling in favour of explicit errors in return values. There is exactly one correct way to lay out Go code, enforced by the gofmt tool. And so on.

Go is also a great language for writing concurrent programs: programs with many independently running parts. An obvious example is a webserver: Every request runs separately, but requests often need to share resources such as sessions, caches, or notification queues. This means skilled Go programmers need to deal with concurrent access to those resources.

While the Go language has an excellent set of low-level features for handling concurrency, using them directly can become complicated. In many cases, a handful of reusable abstractions over those low-level mechanisms makes life much easier.

In today’s Go programming tutorial, we’re going to look at one such abstraction: A wrapper which can turn any data structure into a transactional service. We’ll use a Fund type as an example – a simple store for our startup’s remaining funding, where we can check the balance and make withdrawals.

In this introduction to programming in Go, we’ll build the service in small steps, making a mess along the way and then cleaning it up again. Along the way, we’ll encounter lots of cool Go features, including:

  • Struct types and methods
  • Unit tests and benchmarks
  • Goroutines and channels
  • Interfaces and dynamic typing

A Simple Fund

Let’s write some code to track our startup’s funding. The fund starts with a given balance, and money can only be withdrawn (we’ll figure out revenue later).

This graphic depicts a simple goroutine example using the Go programming language.

Go is deliberately not an object-oriented language: There are no classes, objects, or inheritance. Instead, we’ll declare a struct type called Fund, with a simple function to create new fund structs, and two public methods.

fund.go

package funding

type Fund struct {
    // balance is unexported (private), because it's lowercase
    balance int
}

// A regular function returning a pointer to a fund
func NewFund(initialBalance int) *Fund {
    // We can return a pointer to a new struct without worrying about
    // whether it's on the stack or heap: Go figures that out for us.
    return &Fund{
        balance: initialBalance,
    }
}

// Methods start with a *receiver*, in this case a Fund pointer
func (f *Fund) Balance() int {
    return f.balance
}

func (f *Fund) Withdraw(amount int) {
    f.balance -= amount
}

Testing with benchmarks

Next we need a way to test Fund. Rather than writing a separate program, we’ll use Go’s testing package, which provides a framework for both unit tests and benchmarks. The simple logic in our Fund isn’t really worth writing unit tests for, but since we’ll be talking a lot about concurrent access to the fund later on, writing a benchmark makes sense.

Benchmarks are like unit tests, but include a loop which runs the same code many times (in our case, fund.Withdraw(1)). This allows the framework to time how long each iteration takes, averaging out transient differences from disk seeks, cache misses, process scheduling, and other unpredictable factors.

The testing framework wants each benchmark to run for at least 1 second (by default). To ensure this, it will call the benchmark multiple times, passing in an increasing “number of iterations” value each time (the b.N field), until the run takes at least a second.

For now, our benchmark will just deposit some money and then withdraw it one dollar at a time.

fund_test.go

package funding

import "testing"

func BenchmarkWithdrawals(b *testing.B) {
    // Add as many dollars as we have iterations this run
    fund := NewFund(b.N)

    // Burn through them one at a time until they are all gone
    for i := 0; i < b.N; i++ {
        fund.Withdraw(1)
    }

    if fund.Balance() != 0 {
        b.Error("Balance wasn't zero:", fund.Balance())
    }
}

Now let’s run it:

$ go test -bench . funding
testing: warning: no tests to run
PASS
BenchmarkWithdrawals    2000000000             1.69 ns/op
ok      funding    3.576s

That went well. We ran two billion (!) iterations, and the final check on the balance was correct. We can ignore the “no tests to run” warning, which refers to the unit tests we didn’t write (in later Go programming examples in this tutorial, the warning is snipped out).

Concurrent Access

Now let’s make the benchmark concurrent, to model different users making withdrawals at the same time. To do that, we’ll spawn ten goroutines and have each of them withdraw one tenth of the money.

How would we structure multiple concurrent goroutines in the Go language?

Goroutines are the basic building block for concurrency in the Go language. They are green threads – lightweight threads managed by the Go runtime, not by the operating system. This means you can run thousands (or millions) of them without any significant overhead. Goroutines are spawned with the go keyword, and always start with a function (or method call):

// Returns immediately, without waiting for `DoSomething()` to complete
go DoSomething()

Often, we want to spawn off a short one-time function with just a few lines of code. In this case we can use a closure instead of a function name:

go func() {
    // ... do stuff ...
}() // Must be a function *call*, so remember the ()

Once all our goroutines are spawned, we need a way to wait for them to finish. We could build one ourselves using channels, but we haven’t encountered those yet, so that would be skipping ahead.

For now, we can just use the WaitGroup type in Go’s standard library, which exists for this very purpose. We’ll create one (called “wg”) and call wg.Add(1) before spawning each worker, to keep track of how many there are. Then the workers will report back using wg.Done(). Meanwhile in the main goroutine, we can just say wg.Wait() to block until every worker has finished.

Inside the worker goroutines in our next example, we’ll use defer to call wg.Done().

defer takes a function (or method) call and runs it immediately before the current function returns, after everything else is done. This is handy for cleanup:

func() {
    resource.Lock()
    defer resource.Unlock()

    // Do stuff with resource
}()

This way we can easily match the Unlock with its Lock, for readability. More importantly, a deferred function will run even if there is a panic in the main function (something that we might handle via try-finally in other languages).

Lastly, deferred functions will execute in the reverse order to which they were called, meaning we can do nested cleanup nicely (similar to the C idiom of nested gotos and labels, but much neater):

func() {
    db.Connect()
    defer db.Disconnect()

    // If Begin panics, only db.Disconnect() will execute
    transaction.Begin()
    defer transaction.Close()

    // From here on, transaction.Close() will run first,
    // and then db.Disconnect()

    // ...
}()

OK, so with all that said, here’s the new version:

fund_test.go

package funding

import (
    "sync"
    "testing"
)

const WORKERS = 10

func BenchmarkWithdrawals(b *testing.B) {
    // Skip N = 1
    if b.N < WORKERS {
        return
    }

    // Add as many dollars as we have iterations this run
    fund := NewFund(b.N)

    // Casually assume b.N divides cleanly
    dollarsPerFounder := b.N / WORKERS

    // WaitGroup structs don't need to be initialized
    // (their "zero value" is ready to use).
    // So, we just declare one and then use it.
    var wg sync.WaitGroup

    for i := 0; i < WORKERS; i++ {
        // Let the waitgroup know we're adding a goroutine
        wg.Add(1)
        
        // Spawn off a founder worker, as a closure
        go func() {
            // Mark this worker done when the function finishes
            defer wg.Done()

            for i := 0; i < dollarsPerFounder; i++ {
                fund.Withdraw(1)
            }
            
        }() // Remember to call the closure!
    }

    // Wait for all the workers to finish
    wg.Wait()

    if fund.Balance() != 0 {
        b.Error("Balance wasn't zero:", fund.Balance())
    }
}

We can predict what will happen here. The workers will all execute Withdraw on top of each other. Inside it, f.balance -= amount will read the balance, subtract one, and then write it back. But sometimes two or more workers will both read the same balance, and do the same subtraction, and we’ll end up with the wrong total. Right?

$ go test -bench . funding
BenchmarkWithdrawals    2000000000             2.01 ns/op
ok      funding    4.220s

No, it still passes. What happened here?

Remember that goroutines are green threads – they’re managed by the Go runtime, not by the OS. The runtime schedules goroutines across however many OS threads it has available. At the time of writing this Go language tutorial, Go doesn’t try to guess how many OS threads it should use, and if we want more than one, we have to say so. Finally, the current runtime does not preempt goroutines – a goroutine will continue to run until it does something that suggests it’s ready for a break (like interacting with a channel).

All of this means that although our benchmark is now concurrent, it isn’t parallel. Only one of our workers will run at a time, and it will run until it’s done. We can change this by telling Go to use more threads, via the GOMAXPROCS environment variable.

$ GOMAXPROCS=4 go test -bench . funding
BenchmarkWithdrawals-4    --- FAIL: BenchmarkWithdrawals-4
    account_test.go:39: Balance wasn't zero: 4238
ok      funding    0.007s

That’s better. Now we’re obviously losing some of our withdrawals, as we expected.

In this Go programming example, the outcome of multiple parallel goroutines is not favorable.

Make it a server

At this point we have various options. We could add an explicit mutex or read-write lock around the fund. We could use a compare-and-swap with a version number. We could go all out and use a CRDT scheme (perhaps replacing the balance field with lists of transactions for each client, and calculating the balance from those).

But we won’t do any of those things now, because they’re messy or scary or both. Instead, we’ll decide that a fund should be a server. What’s a server? It’s something you talk to. In Go, things talk via channels.

Channels are the basic communication mechanism between goroutines. Values are sent to the channel (with channel <- value), and can be received on the other side (with value = <- channel). Channels are “goroutine safe”, meaning that any number of goroutines can send to and receive from them at the same time.
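
Here is a tiny sketch of that syntax (the channel and message names are made up for illustration): one goroutine sends a value, and the main goroutine receives it.

package main

import "fmt"

func main() {
    messages := make(chan string)

    // A goroutine sends a value into the channel...
    go func() {
        messages <- "ping"
    }()

    // ...and the main goroutine receives it. With an unbuffered channel,
    // this receive blocks until the send on the other side happens.
    msg := <-messages
    fmt.Println(msg)
}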

Buffering

Buffering communication channels can be a performance optimization in certain circumstances, but it should be used with great care (and benchmarking!).

However, there are uses for buffered channels which aren’t directly about communication.

For instance, a common throttling idiom creates a channel with (for example) buffer size `10` and then sends ten tokens into it immediately. Any number of worker goroutines are then spawned, and each receives a token from the channel before starting work, and sends it back afterward. Then, however many workers there are, only ten will ever be working at the same time.
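
As a rough sketch of that idiom (the job count and the sleep standing in for real work are made up for illustration), it might look like this:

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    // Fill a buffered channel with ten tokens up front.
    tokens := make(chan bool, 10)
    for i := 0; i < 10; i++ {
        tokens <- true
    }

    var wg sync.WaitGroup
    for job := 0; job < 50; job++ {
        wg.Add(1)
        go func(job int) {
            defer wg.Done()
            // Take a token before starting work, and return it afterward,
            // so at most ten jobs ever run at the same time.
            <-tokens
            defer func() { tokens <- true }()
            time.Sleep(10 * time.Millisecond) // stand-in for real work
            fmt.Println("finished job", job)
        }(job)
    }
    wg.Wait()
}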

By default, Go channels are unbuffered. This means that sending a value to a channel will block until another goroutine is ready to receive it. Go also supports fixed buffer sizes for channels (using make(chan someType, bufferSize)). However, for normal use, this is usually a bad idea.

Imagine a webserver for our fund, where each request makes a withdrawal. When things are very busy, the FundServer won’t be able to keep up, and requests trying to send to its command channel will start to block and wait. At that point we can enforce a maximum request count in the server, and return a sensible error code (like a 503 Service Unavailable) to clients over that limit. This is the best behavior possible when the server is overloaded.

Adding buffering to our channels would make this behavior less deterministic. We could easily end up with long queues of unprocessed commands based on information the client saw much earlier (and perhaps for requests which had since timed out upstream). The same applies in many other situations, like applying backpressure over TCP when the receiver can’t keep up with the sender.

In any case, for our Go example, we’ll stick with the default unbuffered behavior.

We’ll use a channel to send commands to our FundServer. Every benchmark worker will send commands to the channel, but only the server will receive them.

We could turn our Fund type into a server implementation directly, but that would be messy – we’d be mixing concurrency handling and business logic. Instead, we’ll leave the Fund type exactly as it is, and make FundServer a separate wrapper around it.

Like any server, the wrapper will have a main loop in which it waits for commands, and responds to each in turn. There’s one more detail we need to address here: The type of the commands.

A diagram of the fund being used as the server in this Go programming tutorial.

Pointers

We could have made our commands channel take *pointers* to commands (`chan *TransactionCommand`). Why didn’t we?

Passing pointers between goroutines is risky, because either goroutine might modify it. It’s also often less efficient, because the other goroutine might be running on a different CPU core (meaning more cache invalidation).

Whenever possible, prefer to pass plain values around.

In the next section below, we’ll be sending several different commands, each with its own struct type. We want the server’s Commands channel to accept any of them. In an OOP language we might do this via polymorphism: Have the channel take a superclass, of which the individual command types were subclasses. In Go, we use interfaces instead.

An interface is a set of method signatures. Any type that implements all of those methods can be treated as that interface (without being declared to do so). For our first run, our command structs won’t actually expose any methods, so we’re going to use the empty interface, interface{}. Since it has no requirements, any value (including primitive values like integers) satisfies the empty interface. This isn’t ideal – we only want to accept command structs – but we’ll come back to it later.
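
As a quick aside, here is a hedged sketch of implicit satisfaction (the Describer interface and WithdrawEvent type are invented for illustration and aren’t used elsewhere):

package main

import "fmt"

// Any type with a Describe() string method satisfies Describer.
type Describer interface {
    Describe() string
}

type WithdrawEvent struct {
    Amount int
}

// WithdrawEvent never declares that it implements Describer;
// having the right method set is enough.
func (w WithdrawEvent) Describe() string {
    return fmt.Sprintf("withdrew %d dollars", w.Amount)
}

func main() {
    var d Describer = WithdrawEvent{Amount: 1}
    fmt.Println(d.Describe())
}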

For now, let’s get started with the scaffolding for our server:

server.go

package funding

type FundServer struct {
    Commands chan interface{}
    fund *Fund
}

func NewFundServer(initialBalance int) *FundServer {
    server := &FundServer{
        // make() creates builtins like channels, maps, and slices
        Commands: make(chan interface{}),
        fund: NewFund(initialBalance),
    }

    // Spawn off the server's main loop immediately
    go server.loop()
    return server
}

func (s *FundServer) loop() {
    // The built-in "range" clause can iterate over channels,
    // amongst other things
    for command := range s.Commands {
    
        // Handle the command
        
    }
}

Now let’s add a couple of struct types for the commands:

type WithdrawCommand struct {
    Amount int
}

type BalanceCommand struct {
    Response chan int
}

The WithdrawCommand just contains the amount to withdraw. There’s no response. The BalanceCommand does have a response, so it includes a channel to send it on. This ensures that responses will always go to the right place, even if our fund later decides to respond out-of-order.

Now we can write the server’s main loop:

func (s *FundServer) loop() {
    for command := range s.Commands {

        // command is just an interface{}, but we can check its real type
        switch command.(type) {

        case WithdrawCommand:
            // And then use a "type assertion" to convert it
            withdrawal := command.(WithdrawCommand)
            s.fund.Withdraw(withdrawal.Amount)

        case BalanceCommand:
            getBalance := command.(BalanceCommand)
            balance := s.fund.Balance()
            getBalance.Response <- balance

        default:
            panic(fmt.Sprintf("Unrecognized command: %v", command))
        }
    }
}

Hmm. That’s sort of ugly. We’re switching on the command type, using type assertions, and possibly crashing. Let’s forge ahead anyway and update the benchmark to use the server.

func BenchmarkWithdrawals(b *testing.B) {
    // ...

    server := NewFundServer(b.N)

    // ...

    // Spawn off the workers
    for i := 0; i < WORKERS; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for i := 0; i < dollarsPerFounder; i++ {
                server.Commands <- WithdrawCommand{ Amount: 1 }
            }
        }()
    }

    // ...

    balanceResponseChan := make(chan int)
    server.Commands <- BalanceCommand{ Response: balanceResponseChan }
    balance := <- balanceResponseChan

    if balance != 0 {
        b.Error("Balance wasn't zero:", balance)
    }
}

That was sort of ugly too, especially when we checked the balance. Never mind. Let’s try it:

$ GOMAXPROCS=4 go test -bench . funding
BenchmarkWithdrawals-4     5000000           465 ns/op
ok      funding    2.822s

Much better, we’re no longer losing withdrawals. But the code is getting hard to read, and there are more serious problems. If we ever issue a BalanceCommand and then forget to read the response, our fund server will block forever trying to send it. Let’s clean things up a bit.

Make it a service

A server is something you talk to. What’s a service? A service is something you talk to with an API. Instead of having client code work with the command channel directly, we’ll make the channel unexported (private) and wrap the available commands up in functions.

type FundServer struct {
    commands chan interface{} // Lowercase name, unexported
    // ...
}

func (s *FundServer) Balance() int {
    responseChan := make(chan int)
    s.commands <- BalanceCommand{ Response: responseChan }
    return <- responseChan
}

func (s *FundServer) Withdraw(amount int) {
    s.commands <- WithdrawCommand{ Amount: amount }
}

Now our benchmark can just say server.Withdraw(1) and balance := server.Balance(), and there’s less chance of accidentally sending it invalid commands or forgetting to read responses.

Here is what using the fund as a service might look like in this sample Go program.
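As a rough sketch (reconstructed from the earlier benchmark, not taken verbatim from the sample), the worker loop and the final check now reduce to:

// Inside each worker goroutine
for i := 0; i < dollarsPerFounder; i++ {
    server.Withdraw(1)
}

// ...

// After all the workers have finished
balance := server.Balance()

if balance != 0 {
    b.Error("Balance wasn't zero:", balance)
}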

There’s still a lot of extra boilerplate for the commands, but we’ll come back to that later.

Transactions

Eventually, the money always runs out. Let’s agree that we’ll stop withdrawing when our fund is down to its last ten dollars, and spend that money on a communal pizza to celebrate or commiserate around. Our benchmark will reflect this:

// Spawn off the workers
for i := 0; i < WORKERS; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := 0; i < dollarsPerFounder; i++ {

            // Stop when we're down to pizza money
            if server.Balance() <= 10 {
                break
            }
            server.Withdraw(1)
        }
    }()
}

// ...

balance := server.Balance()
if balance != 10 {
    b.Error("Balance wasn't ten dollars:", balance)
}

This time we really can predict the result.

$ GOMAXPROCS=4 go test -bench . funding
BenchmarkWithdrawals-4    --- FAIL: BenchmarkWithdrawals-4
    fund_test.go:43: Balance wasn't ten dollars: 6
ok      funding    0.009s

We’re back where we started – several workers can read the balance at once, and then all update it. To deal with this we could add some logic in the fund itself, like a minimumBalance property, or add another command called WithdrawIfOverXDollars. These are both terrible ideas. Our agreement is amongst ourselves, not a property of the fund. We should keep it in application logic.

What we really need is transactions, in the same sense as database transactions. Since our service executes only one command at a time, this is super easy. We’ll add a Transact command which contains a callback (a closure). The server will execute that callback inside its own goroutine, passing in the raw Fund. The callback can then safely do whatever it likes with the Fund.

Semaphores and errors

In this next example we’re doing two small things wrong.

First, we’re using a `Done` channel as a semaphore to let calling code know when its transaction has finished. That’s fine, but why is the channel type `bool`? We’ll only ever send `true` into it to mean “done” (what would sending `false` even mean?). What we really want is a single-state value (a value that has no value?). In Go, we can do this using the empty struct type: `struct{}`. This also has the advantage of using less memory. In the example we’ll stick with `bool` so as not to look too scary.

Second, our transaction callback isn’t returning anything. As we’ll see in a moment, we can get values out of the callback into calling code using scope tricks. However, transactions in a real system would presumably fail sometimes, so the Go convention would be to have the transaction return an `error` (and then check whether it was `nil` in calling code).

We’re not doing that either for now, since we don’t have any errors to generate.

// Typedef the callback for readability
type Transactor func(fund *Fund)

// Add a new command type with a callback and a semaphore channel
type TransactionCommand struct {
    Transactor Transactor
    Done chan bool
}

// ...

// Wrap it up neatly in an API method, like the other commands
func (s *FundServer) Transact(transactor Transactor) {
    command := TransactionCommand{
        Transactor: transactor,
        Done: make(chan bool),
    }
    s.commands <- command
    <- command.Done
}

// ...

func (s *FundServer) loop() {
    for command := range s.commands {
        switch command.(type) {
        // ...

        case TransactionCommand:
            transaction := command.(TransactionCommand)
            transaction.Transactor(s.fund)
            transaction.Done <- true

        // ...
        }
    }
}
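As an aside, here is a minimal sketch of the empty-struct semaphore described earlier. The names QuietTransactionCommand and TransactQuietly are hypothetical, and this variant isn't used in the rest of the article:

// Hypothetical variant of TransactionCommand, using the empty struct type
// as the semaphore instead of bool
type QuietTransactionCommand struct {
    Transactor Transactor
    Done chan struct{} // carries no data; the send itself means "done"
}

func (s *FundServer) TransactQuietly(transactor Transactor) {
    command := QuietTransactionCommand{
        Transactor: transactor,
        Done: make(chan struct{}),
    }
    s.commands <- command
    <- command.Done
}

// The matching case in the server's loop would then end with:
//     transaction.Done <- struct{}{}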

Our transaction callbacks don’t directly return anything, but the Go language makes it easy to get values out of a closure directly, so we’ll do that in the benchmark to set the pizzaTime flag when money runs low:

pizzaTime := false
for i := 0; i < dollarsPerFounder; i++ {

    server.Transact(func(fund *Fund) {
        if fund.Balance() <= 10 {
            // Set it in the outside scope
            pizzaTime = true
            return
        }
        fund.Withdraw(1)
    })

    if pizzaTime {
        break
    }
}

And check that it works:

$ GOMAXPROCS=4 go test -bench . funding
BenchmarkWithdrawals-4     5000000           775 ns/op
ok      funding    4.637s

Nothing but transactions

You may have spotted an opportunity to clean things up some more now. Since we have a generic Transact command, we don't need WithdrawCommand or BalanceCommand anymore. We'll rewrite them in terms of transactions:

func (s *FundServer) Balance() int {
    var balance int
    s.Transact(func(f *Fund) {
        balance = f.Balance()
    })
    return balance
}

func (s *FundServer) Withdraw(amount int) {
    s.Transact(func (f *Fund) {
        f.Withdraw(amount)
    })
}

Now the only command the server takes is TransactionCommand, so we can remove the whole interface{} mess in its implementation, and have it accept only transaction commands:

type FundServer struct {
    commands chan TransactionCommand
    fund *Fund
}

func (s *FundServer) loop() {
    for transaction := range s.commands {
        // Now we don't need any type-switch mess
        transaction.Transactor(s.fund)
        transaction.Done <- true
    }
}

Much better.

There’s a final step we could take here. Apart from its convenience functions for Balance and Withdraw, the service implementation is no longer tied to Fund. Instead of managing a Fund, it could manage an interface{} and be used to wrap anything. However, each transaction callback would then have to convert the interface{} back to a real value:

type Transactor func(interface{})

server.Transact(func(managedValue interface{}) {
    fund := managedValue.(*Fund)
    // Do stuff with fund ...
})

This is ugly and error-prone. What we really want is compile-time generics, so we can “template” out a server for a particular type (like *Fund).

Unfortunately, Go doesn't support generics – yet. It's expected to arrive eventually, once someone figures out some sensible syntax and semantics for it. In the meantime, careful interface design often removes the need for generics, and when it doesn't we can get by with type assertions (which are checked at runtime).

So we’re done, right?

Yes.

Well, okay, no.

For instance:

  • A panic in a transaction will kill the whole service.
  • There are no timeouts. A transaction that never returns will block the service forever.
  • If our Fund grows some new fields and a transaction crashes halfway through updating them, we’ll have inconsistent state.
  • Transactions are able to leak the managed Fund object, which isn’t good.
  • There’s no reasonable way to do transactions across multiple funds (like withdrawing from one and depositing in another). We can’t just nest our transactions because it would allow deadlocks.
  • Running a transaction asynchronously now requires a new goroutine and a lot of messing around. Relatedly, we probably want to be able to read the most recent Fund state from elsewhere while a long-running transaction is in progress.

In the next Go programming tutorial, we’ll look at some ways to address these issues.

This article was written by Brendon Hogger, a Toptal Python developer.

Introduction to Kotlin: Android Programming For Humans


In a perfect Android world, Java, the platform's main language, would be truly modern, clear, and elegant. We could do more by writing less, and whenever a new feature appeared, developers could use it just by bumping a version in Gradle. Then, while creating a very nice app, it would turn out fully testable, extensible, and maintainable. Our activities would not be too large and complicated, we could change data sources from database to web without tons of changes, and so on. Sounds great, right? Unfortunately, the Android world isn't this ideal. Google is still striving for perfection, but we all know that ideal worlds don't exist. Thus, we have to help ourselves on that great journey through the Android world.

Can Kotlin replace Java?

Kotlin is a popular new player in the Android world. But can it ever replace Java?

What Is Kotlin, and Why Should You Use It?

So, first, the language. I think that Java isn't the master of elegance or clarity, and it is neither modern nor expressive (and I'm guessing you agree). The disadvantage is that below Android N, we are still limited to Java 6 (including some small parts of Java 7). Developers can also attach RetroLambda to use lambda expressions in their code, which is very useful when working with RxJava. On Android N and above, we can use some of Java 8's new functionality, but it's still that old, heavy Java. Very often I hear Android developers say "I wish Android supported a nicer language, like iOS does with Swift". And what if I told you that you can use a very nice, simple language, with null safety, lambdas, and many other pleasant new features? Welcome to Kotlin.

What is Kotlin?

Kotlin is a new language (sometimes referred to as the Swift of Android), developed by the JetBrains team, and now at version 1.0.2. What makes it useful in Android development is that it compiles to JVM bytecode, and can also be compiled to JavaScript. It is fully interoperable with Java, and Kotlin code can simply be converted to Java code and vice versa (there is a plugin from JetBrains). That means Kotlin can use any framework, library, etc. written in Java. On Android, it integrates via Gradle. If you have an existing Android app and you want to implement a new feature in Kotlin without rewriting the whole app, just start writing in Kotlin, and it will work.

But what are the ‘great new features’? Let me list a few:

Optional and Named Function Parameters

fun createDate(day: Int, month: Int, year: Int, hour: Int = 0, minute: Int = 0, second: Int = 0) {
   Log.d("TEST", "$day-$month-$year $hour:$minute:$second")
}

We can call the createDate method in different ways:

createDate(1,2,2016) prints:  ‘1-2-2016 0:0:0’
createDate(1,2,2016, 12) prints: ‘1-2-2016 12:0:0’
createDate(1,2,2016, minute = 30) prints: ‘1-2-2016 0:30:0’

Null Safety

If a variable can be null, the code will not compile unless we explicitly handle that possibility. The following code will have an error – nullableVar may be null:

var nullableVar: String? = ""
nullableVar.length

To compile, we have to check if it’s not null:

if (nullableVar != null) {
    nullableVar.length
}

Or, shorter:

nullableVar?.length

This way, if nullableVar is null, nothing happens. Otherwise, we can mark the variable as not nullable, without a question mark after the type:

var nonNullableVar: String = ""
nonNullableVar.length

This code compiles, and if we try to assign null to nonNullableVar, the compiler will show an error.

There is also the very useful Elvis operator:

var stringLength = nullableVar?.length ?: 0 

Then, when nullableVar is null (so nullableVar?.length returns null), stringLength will have value 0.

Mutable and Immutable Variables

In the example above, I use var when defining a variable. Variables declared with var are mutable – we can reassign them whenever we want. If we want a variable to be immutable (in many cases we should), we use val (as in value, not variable):

val immutable: Int = 1

After that, the compiler will not allow us to reassign to immutable.

Lambdas

We all know what a lambda is, so here I will just show how we can use it in Kotlin:

button.setOnClickListener({ view -> Log.d("Kotlin","Click")})

Or if the function is the only or last argument:

button.setOnClickListener { Log.d("Kotlin","Click")}

Extensions

Extensions are a very helpful language feature, thanks to which we can “extend” existing classes, even when they are final or we don’t have access to their source code.

For example, to get a string value from edit text, instead of writing every time editText.text.toString() we can write the function:

fun EditText.textValue(): String{
   return text.toString()
}

Or shorter:

fun EditText.textValue() = text.toString()

And now, with every instance of EditText:

editText.textValue()

Or, we can add a property returning the same:

var EditText.textValue: String
   get() = text.toString()
   set(v) {setText(v)}

Operator Overloading

Operator overloading is sometimes useful if we want to add, multiply, or compare objects. Kotlin allows overloading of binary operators (plus, minus, plusAssign, range, etc.), array operators (get, set, get range, set range), and equals and unary operators (+a, -a, etc.).
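For instance, a hypothetical Money class (not part of the sample app) could overload plus and compareTo like this:

data class Money(val amount: Int) {
   // enables "a + b" for Money values
   operator fun plus(other: Money) = Money(amount + other.amount)

   // enables "a > b", "a <= b", and so on
   operator fun compareTo(other: Money) = amount - other.amount
}

val total = Money(10) + Money(5)   // Money(amount=15)
val cheaper = Money(5) < Money(10) // true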

Data Class

How many lines of code do you need in Java to implement a User class with three properties, plus equals, hashCode, toString, and a copy method? In Kotlin you need only one line:

data class User(val name: String, val surname: String, val age: Int)

This data class provides equals(), hashCode() and copy() methods, and also toString(), which prints User as:

User(name=John, surname=Doe, age=23)

Data classes also provide some other useful functions and properties, which you can see in Kotlin documentation.
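For example (illustrative snippets, not part of the sample app), copying with a single changed field and destructuring both come for free:

val john = User("John", "Doe", 23)
val olderJohn = john.copy(age = 24)      // new instance; the other fields are unchanged
val (name, surname, age) = olderJohn     // destructuring via the generated componentN() functions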

Anko Extensions

You use Butterknife or a similar view-binding library, don't you? What if you didn't need such a library at all, and after declaring views in XML could just use them from code by their ID (like with XAML in C#):

<Button
        android:id="@+id/loginBtn"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />

loginBtn.setOnClickListener{}

Kotlin has a very useful extension for this, and with it you don't need to tell your activity what loginBtn is – it knows it just by "importing" the XML:

import kotlinx.android.synthetic.main.activity_main.*

There are many other useful things in Anko, including starting activities, showing toasts, and so on. This is not the main goal of Anko – it is designed for easily creating layouts from code. So if you need to create a layout programmatically, this is the best way.
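A minimal sketch of that layout DSL, along the lines of the example in Anko's own documentation (this is not code from the sample app):

override fun onCreate(savedInstanceState: Bundle?) {
   super.onCreate(savedInstanceState)
   verticalLayout {
       val name = editText()
       button("Say Hello") {
           onClick { toast("Hello, ${name.text}!") }
       }
   }
}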

This is only a short overview of Kotlin. I recommend reading Antonio Leiva's blog and his book – Kotlin for Android Developers – and of course the official Kotlin site.

What Is MVP and Why?

A nice, powerful, and clear language is not enough. It's very easy to write messy apps in any language without good architecture. Android developers (mostly those who are just getting started, but also more advanced ones) often give the Activity responsibility for everything around it. The Activity (or Fragment, or other view) downloads data, saves it, presents it, responds to user interactions, edits data, manages all of its child views . . . and often much more. It's too much for such unstable objects as Activities or Fragments (it's enough to rotate the screen and the Activity says 'Goodbye….').

A very good idea is to isolate responsibilities from views and make them as stupid as possible. Views (Activities, Fragments, custom views, or whatever presents data on screen) should only be responsible for managing their subviews. Views should have presenters, which communicate with the model and tell the views what to do. This, in short, is the Model-View-Presenter pattern (to me, it could just as well be named Model-Presenter-View to show the connections between layers).

MVC vs MVP

“Hey, I know something like that, and it’s called MVC!” – you might be thinking. No, MVP is not the same as MVC. In the MVC pattern, your view can communicate with the model. In MVP, you don't allow any communication between these two layers – the only way the View can communicate with the Model is through the Presenter. The only thing the View knows about the Model is the data structure. The View knows how to, for example, display a User, but doesn't know when. Here's a simple example:

View knows “I’m Activity, I have two EditTexts and one Button. When somebody clicks the button, I should tell it to my presenter, and pass him EditTexts’ values. And that’s all, I can sleep until next click or presenter tells me what to do.”

Presenter knows that somewhere is a View, and he knows what operations this View can perform. He also knows that when he receives two strings, he should create User from these two strings and send data to model to save, and if save is successful, tell the view ‘Show success info’.

The Model just knows where the data is, where it should be saved, and what operations should be performed on it.

Applications written in MVP are easy to test, maintain, and reuse. A pure presenter should know nothing about the Android platform. It should be a pure Java (or, in our case, Kotlin) class. Thanks to this we can reuse our presenter in other projects. We can also easily write unit tests, testing the Model, View, and Presenter separately.

The MVP pattern leads to a better, less complex codebase by keeping the user interface and the business logic truly separate.

A little digression: MVP should be a part of Uncle Bob's Clean Architecture to make applications even more flexible and nicely architected. I'll try to write about that next time.

Sample App with MVP and Kotlin

That's enough theory – let's see some code! Okay, let's try to create a simple app. The main goal of this app is to create a user. The first screen will have two EditTexts (Name and Surname) and one Button (Save). After entering a name and surname and clicking 'Save', the app should show 'User is saved' and go to the next screen, where the saved name and surname are presented. When the name or surname is empty, the app should not save the user, and should show an error indicating what's wrong.

The first thing to do after creating the Android Studio project is to configure Kotlin. You should install the Kotlin plugin and, after a restart, click 'Configure Kotlin in Project' under Tools > Kotlin. The IDE will then add the Kotlin dependencies to Gradle. If you have any existing code, you can easily convert it to Kotlin (Ctrl+Shift+Alt+K or Code > Convert Java file to Kotlin). If something is wrong and the project does not compile, or Gradle does not see Kotlin, you can check the code of the app, which is available on GitHub.

Kotlin not only interoperates well with Java frameworks and libraries, it allows you to continue using most of the same tools that you are already familiar with.

Now that we have a project, let’s start by creating our first view – CreateUserView. This view should have the functionalities mentioned earlier, so we can write an interface for that:

interface CreateUserView : View {
   fun showEmptyNameError() /* show error when name is empty */
   fun showEmptySurnameError() /* show error when surname is empty */
   fun showUserSaved() /* show user saved info */
   fun showUserDetails(user: User) /* show user details */
}

As you can see, Kotlin is similar to Java in declaring functions. All of those are functions that return nothing, and the last one takes a single parameter. Here is one difference from Java: the parameter type comes after its name. The View interface is not from Android – it's our own simple, empty interface:

interface View

The basic Presenter interface should have a property of the View type, and at least one method (onDestroy, for example) where this property will be set to null:

interface Presenter<T : View> {
   var view: T?

   fun onDestroy(){
       view = null
   }
}

Here you can see another Kotlin feature – you can declare properties in interfaces, and also implement methods there.

Our CreateUserView needs to communicate with CreateUserPresenter. The only additional function that this Presenter needs is saveUser with two string arguments:

interface CreateUserPresenter<T : View>: Presenter<T> {
   fun saveUser(name: String, surname: String)
}

We also need the Model definition – it's the data class mentioned earlier:

data class User(val name: String, val surname: String)

After declaring all interfaces, we can start implementing.

CreateUserPresenter will be implemented in CreateUserPresenterImpl:

class CreateUserPresenterImpl(override var view: CreateUserView?): CreateUserPresenter<CreateUserView> {

   override fun saveUser(name: String, surname: String) {
   }
}

The first line, with the class definition:

CreateUserPresenterImpl(override var view: CreateUserView?)

is a constructor; we use it to assign the view property defined in the interface.

MainActivity, which is our CreateUserView implementation, needs a reference to CreateUserPresenter:

class MainActivity : AppCompatActivity(), CreateUserView {

   private val presenter: CreateUserPresenter<CreateUserView> by lazy {
       CreateUserPresenterImpl(this)
   }

   override fun onCreate(savedInstanceState: Bundle?) {
       super.onCreate(savedInstanceState)
       setContentView(R.layout.activity_main)

       saveUserBtn.setOnClickListener{
           presenter.saveUser(userName.textValue(), userSurname.textValue()) /* uses the textValue() extension mentioned earlier */
       }
   }

   override fun showEmptyNameError() {
       userName.error = getString(R.string.name_empty_error) /* it's equal to userName.setError() - Kotlin allows us to use property */

   }

   override fun showEmptySurnameError() {
       userSurname.error = getString(R.string.surname_empty_error)
   }

   override fun showUserSaved() {
       toast(R.string.user_saved) /* anko extension - equal to Toast.makeText(this, R.string.user_saved, Toast.LENGTH_LONG) */

   }

   override fun showUserDetails(user: User) {
      
   }

   override fun onDestroy() {
       presenter.onDestroy()
   }
}

At the beginning of the class, we defined our presenter:

private val presenter: CreateUserPresenter<CreateUserView> by lazy {
       CreateUserPresenterImpl(this)
}

It is defined as immutable (val), and is created by the lazy delegate, so it will be assigned the first time it is needed. Moreover, we are sure that it will not be null (there is no question mark after the type).

When the user clicks the Save button, the View sends the EditTexts' values to the Presenter. When that happens, the user should be saved, so we have to implement the saveUser method in the Presenter (and some of the Model's functions):

override fun saveUser(name: String, surname: String) {
   val user = User(name, surname)
   when(UserValidator.validateUser(user)){
       UserError.EMPTY_NAME -> view?.showEmptyNameError()
       UserError.EMPTY_SURNAME -> view?.showEmptySurnameError()
       UserError.NO_ERROR -> {
           UserStore.saveUser(user)
            view?.showUserSaved()
           view?.showUserDetails(user)
       }
   }
}

When a User is created, it is sent to UserValidator to check its validity. Then, depending on the result of validation, the proper method is called. The when() {} construct works like switch/case in Java, but it is more powerful – Kotlin allows not only enums or ints in the 'cases', but also ranges, strings, or object types. It must either cover all possibilities or have an else branch. Here, it covers all UserError values.
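A couple of illustrative when() expressions (not from the sample app) showing matches on constants, ranges, and types:

fun grade(score: Int) = when (score) {
   in 90..100 -> "A"
   in 80..89 -> "B"
   else -> "lower"
}

fun describe(obj: Any) = when (obj) {
   "John" -> "our sample user's first name"
   is User -> "a User called ${obj.name}" // smart cast: obj is a User in this branch
   else -> "something else"
}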

By using view?.showEmptyNameError() (with a question mark after view), we are protected from a NullPointerException. The view can be set to null in the onDestroy method, and with this construct nothing will happen in that case.

When the User model has no errors, the presenter tells UserStore to save it, and then instructs the View to show success and display the details.

As mentioned earlier, we have to implement some model things:

enum class UserError {
   EMPTY_NAME,
   EMPTY_SURNAME,
   NO_ERROR
}

object UserStore {
   fun saveUser(user: User){
       //Save user somewhere: Database, SharedPreferences, send to web...
   }
}

object UserValidator {

   fun validateUser(user: User): UserError {
       with(user){
           if(name.isNullOrEmpty()) return UserError.EMPTY_NAME
           if(surname.isNullOrEmpty()) return UserError.EMPTY_SURNAME
       }

       return UserError.NO_ERROR
   }
}

The most interesting thing here is UserValidator. By using the object keyword, we create a singleton, with no worries about threads, private constructors, and so on.

Next, in the validateUser(user) method there is a with(user) {} expression. Code within such a block is executed in the context of the object passed to with, so here name and surname are the User's properties.

There is also another little thing: all of the code above, from the enum to UserValidator, is placed in one file (the definition of the User class is also there). Kotlin does not force you to put each public class in its own file (or to name the class exactly like the file). Thus, if you have some short pieces of related code (data classes, extensions, functions, constants – Kotlin doesn't require a class for a function or a constant), you can place them in one single file instead of spreading them across all the files in the project.

When a user is saved, our app should display that. We need another View – it can be any Android view, custom View, Fragment or Activity. I chose Activity.

So, let’s define UserDetailsView interface. It can show user, but it should also show an error when user is not present:

interface UserDetailsView {
   fun showUserDetails(user: User)
   fun showNoUserError()
}

Next, UserDetailsPresenter. It should have a user property:

interface UserDetailsPresenter<T: View>: Presenter<T> {
   var user: User?
}

This interface will be implemented in UserDetailsPresenterImpl. It has to override user property. Every time this property is assigned, user should be refreshed on the view. We can use a property setter for this:

class UserDetailsPresenterImpl(override var view: UserDetailsView?): UserDetailsPresenter<UserDetailsView> {
   override var user: User? = null
       set(value) {
           field = value
           if(field != null){
               view?.showUserDetails(field!!)
           } else {
               view?.showNoUserError()
           }
       }

}

The UserDetailsView implementation, UserDetailsActivity, is very simple. Just like before, we have a presenter object created lazily. The user to display should be passed via the intent. There is one little problem with this for now, and we will solve it in a moment. When we have the user from the intent, the View needs to assign it to its presenter. After that, the user will be refreshed on the screen, or, if it is null, the error will appear (and the activity will finish – but the presenter doesn't know about that):

class UserDetailsActivity: AppCompatActivity(), UserDetailsView {

   private val presenter: UserDetailsPresenter<UserDetailsView> by lazy {
       UserDetailsPresenterImpl(this)
   }

   override fun onCreate(savedInstanceState: Bundle?) {
       super.onCreate(savedInstanceState)
       setContentView(R.layout.activity_user_details)

       val user = intent.getParcelableExtra<User>(USER_KEY)
       presenter.user = user
   }

   override fun showUserDetails(user: User) {
       userFullName.text = "${user.name} ${user.surname}"
   }

   override fun showNoUserError() {
       toast(R.string.no_user_error)
       finish()
   }

   override fun onDestroy() {
       presenter.onDestroy()
   }

}

Passing objects via intents requires that the object implements the Parcelable interface. This is very 'dirty' work. Personally, I hate doing it because of all the CREATORs, properties, saving, restoring, and so on. Fortunately, there is a proper plugin, Parcelable for Kotlin. After installing it, we can generate the Parcelable implementation with one click.

The last thing to do is to implement showUserDetails(user: User) in our MainActivity:

override fun showUserDetails(user: User) {
   startActivity<UserDetailsActivity>(USER_KEY to user) /* anko extension - starts UserDetailsActivity and pass user as USER_KEY in intent */
}

And that’s all.

Demo Android app in Kotlin

We have a simple app that saves a User (actually, it is not really saved, but we can add this functionality without touching the presenter or the view) and presents it on the screen. In the future, if we want to change the way the user is presented on the screen – for example, from two activities to two fragments in one activity, or to two custom views – the changes will only touch the View classes, provided we don't change the functionality or the model's structure. The Presenter, which doesn't know what exactly the View is, won't need any changes.

What’s Next?

In our app, the Presenter is created every time an activity is created. Whether to do this, or the opposite – have the Presenter persist across activity instances – is a subject of much discussion across the Internet. For me, it depends on the app, its needs, and the developer. Sometimes it's better to destroy the presenter, sometimes not. If you decide to persist one, a very interesting technique is to use a LoaderManager for that.

As mentioned before, MVP should be part of Uncle Bob's Clean Architecture. Moreover, good developers should use Dagger to inject presenters' dependencies into activities. It also helps to maintain, test, and reuse code in the future. Currently, Kotlin works very well with Dagger (before the official release it wasn't so easy), and also with other helpful Android libraries.

Wrap Up

For me, Kotlin is a great language. It’s modern, clear, and expressive while still being developed by great people. And we can use any new release on any Android device and version. Whatever makes me angry at Java, Kotlin improves.

Of course, as I said, nothing is ideal. Kotlin also has some disadvantages. The newest Gradle plugin versions (mainly alpha or beta ones) don't always work well with this language. Many people complain that build times are a bit longer than with pure Java, and that APKs gain a few extra megabytes. But Android Studio and Gradle are still improving, and phones have more and more space for apps. That's why I believe Kotlin can be a very nice language for every Android developer. Just give it a try, and share what you think in the comments section below.

Source code of the sample app is available on Github: github.com/tomaszczura/AndroidMVPKotlin

This article was written by Tomasz Czura, a Toptal Java developer.

Scaling Scala: How to Dockerize Using Kubernetes


Kubernetes is the new kid on the block, promising to help deploy applications into the cloud and scale them more quickly. Today, when developing for a microservices architecture, it’s pretty standard to choose Scala for creating API servers.

Microservices are replacing classic monolithic back-end servers with multiple independent services that communicate among themselves and have their own processes and resources.

If there is a Scala application in your plans and you want to scale it into a cloud, then you are at the right place. In this article I am going to show step-by-step how to take a generic Scala application and implement Kubernetes with Docker to launch multiple instances of the application. The final result will be a single application deployed as multiple instances, and load balanced by Kubernetes.

All of this will be implemented by simply importing the Kubernetes source kit in your Scala application. Please note, the kit hides a lot of complicated details related to installation and configuration, but it is small enough to be readable and easy to understand if you want to analyze what it does. For simplicity, we will deploy everything on your local machine. However, the same configuration is suitable for a real-world cloud deployment of Kubernetes.

Scale Your Scala Application with Kubernetes

Be smart and sleep tight, scale your Docker with Kubernetes.

What is Kubernetes?

Before going into the gory details of the implementation, let’s discuss what Kubernetes is and why it’s important.

You may have already heard of Docker. In a sense, it is a lightweight virtual machine.

Docker gives the advantage of deploying each server in an isolated environment, very similar to a stand-alone virtual machine, without the complexity of managing a full-fledged virtual machine.

For these reasons, it is already one of the more widely used tools for deploying applications in the cloud. A Docker image is pretty easy and fast to build and duplicate – much more so than a traditional virtual machine image for VMware, VirtualBox, or Xen.

Kubernetes complements Docker, offering a complete environment for managing dockerized applications. By using Kubernetes, you can easily deploy, configure, orchestrate, manage, and monitor hundreds or even thousands of Docker applications.

Kubernetes is an open source tool developed by Google and has been adopted by many other vendors. Kubernetes is available natively on the Google Cloud Platform, but other vendors have adopted it for their own cloud services too. It can be found on Amazon AWS, Microsoft Azure, Red Hat OpenShift, and even more cloud technologies. We can say it is well positioned to become a standard for deploying cloud applications.

Prerequisites

Now that we covered the basics, let’s check if you have all the prerequisite software installed. First of all, you need Docker. If you are using either Windows or Mac, you need the Docker Toolbox. If you are using Linux, you need to install the particular package provided by your distribution or simply follow the official directions.

We are going to code in Scala, which is a JVM language. You need, of course, the Java Development Kit and the Scala build tool SBT installed and available in the global path. If you are already a Scala programmer, chances are you have those tools installed already.

If you are using Windows or Mac, Docker will by default create a virtual machine named default with only 1GB of memory, which can be too small for running Kubernetes. In my experience, I had issues with the default settings. I recommend that you open the VirtualBox GUI, select your virtual machine default, and change the memory to at least 2048MB.

VirtualBox memory settings

The Application to Clusterize

The instructions in this tutorial can apply to any Scala application or project. For this article to have some "meat" to work on, I chose an example used very often to demonstrate a simple REST microservice in Scala: Akka HTTP. I recommend you try to apply the source kit to the suggested example before attempting to use it on your own application. I have tested the kit against the demo application, but I cannot guarantee that there will be no conflicts with your code.

So first, we start by cloning the demo application:

git clone https://github.com/theiterators/akka-http-microservice

Next, test if everything works correctly:

cd akka-http-microservice
sbt run

Then, browse to http://localhost:9000/ip/8.8.8.8, and you should see something like the following image:

Akka HTTP microservice is running

Adding the Source Kit

Now, we can add the source kit with some Git magic:

git remote add ScalaGoodies https://github.com/sciabarra/ScalaGoodies
git fetch --all
git merge ScalaGoodies/kubernetes

With that, you have the demo including the source kit, and you are ready to try. Or you can even copy and paste the code from there into your application.

Once you have merged or copied the files in your projects, you are ready to start.

Starting Kubernetes

Once you have the kit in place, you need to download the necessary kubectl binary by running:

bin/install.sh

This installer is smart enough (hopefully) to download the correct kubectl binary for OSX, Linux, or Windows, depending on your system. Note, the installer worked on the systems I own. Please do report any issues, so that I can fix the kit.

Once you have installed the kubectl binary, you can start the whole Kubernetes in your local Docker. Just run:

bin/start-kube-local.sh

The first time it is run, this command will download the images of the whole Kubernetes stack, and a local registry needed to store your images. It can take some time, so please be patient. Also note that it needs direct access to the internet. If you are behind a proxy, that will be a problem, as the kit does not support proxies. To solve it, you have to configure tools like Docker, curl, and so on to use the proxy. It is complicated enough that I recommend getting temporary unrestricted access.

Assuming you were able to download everything successfully, to check if Kubernetes is running fine, you can type the following command:

bin/kubectl get nodes

The expected answer is:

NAME        STATUS    AGE
127.0.0.1   Ready     2m

Note that age may vary, of course. Also, since starting Kubernetes can take some time, you may have to invoke the command a couple of times before you see the answer. If you do not get errors here, congratulations, you have Kubernetes up and running on your local machine.

Dockerizing Your Scala App

Now that you have Kubernetes up and running, you can deploy your application in it. In the old days, before Docker, you had to deploy an entire server for running your application. With Kubernetes, all you need to do to deploy your application is:

  • Create a Docker image.
  • Push it in a registry from where it can be launched.
  • Launch the instance with Kubernetes, which will take the image from the registry.

Luckily, it is way less complicated than it looks, especially if you are using the SBT build tool like many do.

In the kit, I included two files containing all the necessary definitions to create an image able to run Scala applications, or at least what is needed to run the Akka HTTP demo. I cannot guarantee that it will work with any other Scala application, but it is a good starting point, and should work for many different configurations. The files to look at for building the Docker image are:

docker.sbt
project/docker.sbt

Let’s have a look at what’s in them. The file project/docker.sbt contains the command to import the sbt-docker plugin:

addSbtPlugin("se.marcuslonnberg" % "sbt-docker" % "1.4.0")

This plugin manages the building of the Docker image with SBT for you. The Docker definition is in the docker.sbt file and looks like this:

imageNames in docker := Seq(ImageName("localhost:5000/akkahttp:latest"))

dockerfile in docker := {
  val jarFile: File = sbt.Keys.`package`.in(Compile, packageBin).value
  val classpath = (managedClasspath in Compile).value
  val mainclass = mainClass.in(Compile, packageBin).value.getOrElse(sys.error("Expected exactly one main class"))
  val jarTarget = s"/app/${jarFile.getName}"
  val classpathString = classpath.files.map("/app/" + _.getName)
    .mkString(":") + ":" + jarTarget
  new Dockerfile {
    from("anapsix/alpine-java:8")
    add(classpath.files, "/app/")
    add(jarFile, jarTarget)
    entryPoint("java", "-cp", classpathString, mainclass)
  }
}

To fully understand the meaning of this file, you need to know Docker well enough to understand this definition file. However, we are not going into the details of the Docker definition file, because you do not need to understand it thoroughly to build the image.

The beauty of using SBT for building the Docker image is that SBT will take care of collecting all the files for you.

Note the classpath is automatically generated by the following command:

val classpath = (managedClasspath in Compile).value

In general, it is pretty complicated to gather all the JAR files to run an application. Using SBT, the Docker file will be generated with add(classpath.files, "/app/"). This way, SBT collects all the JAR files for you and constructs a Dockerfile to run your application.

The other commands gather the missing pieces to create the Docker image. The image will be built on top of an existing image suitable for running Java programs (anapsix/alpine-java:8, available on Docker Hub). The other instructions add the remaining files needed to run your application. Finally, by specifying an entry point, we can run it. Note also that the name starts with localhost:5000 on purpose, because localhost:5000 is where I installed the registry in the start-kube-local.sh script.

Building the Docker Image with SBT

To build the Docker image, you can ignore all the details of the Dockerfile. You just need to type:

sbt dockerBuildAndPush

The sbt-docker plugin will then build a Docker image for you, downloading all the necessary pieces from the internet, and then push it to the Docker registry that was started earlier, together with the Kubernetes application, on localhost. So, all you need to do is wait a little bit longer to have your image cooked and ready.

Note, if you experience problems, the best thing to do is to reset everything to a known state by running the following commands:

bin/stop-kube-local.sh
bin/start-kube-local.sh

Those commands should stop all the containers and restart them correctly to get your registry ready to receive the image built and pushed by sbt.

Starting the Service in Kubernetes

Now that the application is packaged in a container and pushed to a registry, we are ready to use it. Kubernetes uses the command line and configuration files to manage the cluster. Since command lines can become very long, and so that the steps can be replicated, I am using configuration files here. All the samples in the source kit are in the folder kube.

Our next step is to launch a single instance of the image. A running image is called, in the Kubernetes language, a pod. So let’s create a pod by invoking the following command:

bin/kubectl create -f kube/akkahttp-pod.yml

You can now inspect the situation with the command:

bin/kubectl get pods

You should see:

NAME                   READY     STATUS    RESTARTS   AGE
akkahttp               1/1       Running   0          33s
k8s-etcd-127.0.0.1     1/1       Running   0          7d
k8s-master-127.0.0.1   4/4       Running   0          7d
k8s-proxy-127.0.0.1    1/1       Running   0          7d

The status can actually be different – for example, "ContainerCreating" – and it can take a few seconds before it becomes "Running". You can also get a status like "Error" if, for example, you forgot to create the image beforehand.

You can also check if your pod is running with the command:

bin/kubectl logs akkahttp

You should see an output ending with something like this:

[DEBUG] [05/30/2016 12:19:53.133] [default-akka.actor.default-dispatcher-5] [akka://default/system/IO-TCP/selectors/$a/0] Successfully bound to /0:0:0:0:0:0:0:0:9000

Now you have the service up and running inside the container. However, the service is not yet reachable. This behavior is part of the design of Kubernetes. Your pod is running, but you have to expose it explicitly. Otherwise, the service is meant to be internal.

Creating a Service

Creating a service and checking the result is a matter of executing:

bin/kubectl create -f kube/akkahttp-service.yaml
bin/kubectl get svc

You should see something like this:

NAME               CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
akkahttp-service   10.0.0.54                  9000/TCP   44s
kubernetes         10.0.0.1     <none>        443/TCP    3m

Note that the cluster IP can be different. Kubernetes allocated it for the service and started it. If you are using Linux, you can directly open the browser and type http://10.0.0.54:9000/ip/8.8.8.8 to see the result. If you are using Windows or Mac with Docker Toolbox, the IP is local to the virtual machine that is running Docker, and unfortunately it is still unreachable.

I want to stress here that this is not a problem of Kubernetes, rather it is a limitation of the Docker Toolbox, which in turn depends on the constraints imposed by virtual machines like VirtualBox, which act like a computer within another computer. To overcome this limitation, we need to create a tunnel. To make things easier, I included another script which opens a tunnel on an arbitrary port to reach any service we deployed. You can type the following command:

bin/forward-kube-local.sh akkahttp-service 9000

Note that the tunnel will not run in the background; you have to keep the terminal window open as long as you need it, and close it when you do not need the tunnel anymore. While the tunnel is running, you can open http://localhost:9000/ip/8.8.8.8 and finally see the application running in Kubernetes.

Final Touch: Scale

So far we have “simply” put our application in Kubernetes. While it is an exciting achievement, it does not add too much value to our deployment. We’re saved from the effort of uploading and installing on a server and configuring a proxy server for it.

Where Kubernetes shines is in scaling. You can deploy two, ten, or one hundred instances of our application by only changing the number of replicas in the configuration file. So let’s do it.

We are going to stop the single pod and start a deployment instead. So let’s execute the following commands:

bin/kubectl delete -f kube/akkahttp-pod.yml
bin/kubectl create -f kube/akkahttp-deploy.yaml

Next, check the status. Again, you may try a couple of times because the deployment can take some time to be performed:

NAME                                   READY     STATUS    RESTARTS   AGE
akkahttp-deployment-4229989632-mjp6u   1/1       Running   0          16s
akkahttp-deployment-4229989632-s822x   1/1       Running   0          16s
k8s-etcd-127.0.0.1                     1/1       Running   0          6d
k8s-master-127.0.0.1                   4/4       Running   0          6d
k8s-proxy-127.0.0.1                    1/1       Running   0          6d

Now we have two pods, not one. This is because in the configuration file I provided there is the value replicas: 2, and the two different names are generated by the system. I am not going into the details of the configuration files, because the scope of this article is simply an introduction for Scala programmers to jump-start into Kubernetes.

Anyhow, there are now two pods active. What is interesting is that the service is the same as before. We configured the service to load balance between all the pods labeled akkahttp. This means we do not have to redeploy the service, but we can replace the single instance with a replicated one.

We can verify this by launching the proxy again (if you are on Windows and you have closed it):

bin/forward-kube-local.sh akkahttp-service 9000

Then, we can try to open two terminal windows and see the logs for each pod. For example, in the first type:

bin/kubectl logs -f akkahttp-deployment-4229989632-mjp6u

And in the second type:

bin/kubectl logs -f akkahttp-deployment-4229989632-s822x

Of course, edit the command lines according to the values you have on your system.

Now, try to access the service with two different browsers. You should expect to see the requests split between the multiple available servers, like in the following image:

Kubernetes in action

Conclusion

Today we barely scratched the surface. Kubernetes offers a lot more possibilities, including automated scaling and restart, incremental deployments, and volumes. Furthermore, the application we used as an example is very simple and stateless, with the various instances not needing to know about each other. In the real world, distributed applications do need to know about each other, and need to change configuration according to the availability of other servers. Indeed, Kubernetes offers a distributed key-value store (etcd) to allow different applications to communicate with each other when new instances are deployed. However, this example is purposely small and simplified to help you get going, focusing on the core functionality. If you follow the tutorial, you should be able to get a working environment for your Scala application on your machine without being confused by a large number of details and getting lost in the complexity.

This article was written by Michele Sciabarra, a Toptal Scala developer.

The 5 Most Common UI Design Mistakes


Although the title UI Designer suggests a sort of departure from the traditional graphic designer, UI design is still a part of the historical trajectory of the visual design discipline.

With each movement or medium, the discipline has introduced new graphic languages, layouts, and design processes. Between generations, the designer has straddled the transition from press to xerox, or paper to pixel. Across these generations, graphic design has carried out the responsibility of representing the visual language of each era respectively.

Therefore, as UI Design makes the transition out of its infancy, what sort of graphic world can we expect to develop? Unfortunately, based on the current trajectory, the future may look bleak. Much of UI Design today has become standardized and repeatable. Design discussions online involve learning the rules to get designs to safely work, rather than push the envelope, or imagine new things. The tendency for UI Designers to resort to patterns and trends has not only created a bland visual environment, but also diminished the value of the designer as processes become more and more formulaic. The issue is precisely not one of technicalities, but of impending visual boredom.

Thus, the Top Five Common UI Design mistakes are:

  • Following Design Rules
  • Abusing the Grid
  • Misunderstanding Typefaces
  • Patterns and the Standardization of UI Design
  • Finding Safety in Contrast

UI Design Rule Book

Understand principles and be creative within their properties. Following the rules will only take you where others have been.

Common Mistake #1: UI Designers Follow the Rules

The world of graphic design has always followed sets of rules and standards. Quite often in any design discipline, the common mistakes that are made can closely coincide with a standard rule that has been broken. Thus, from this perspective the design rules seem to be pretty trustworthy to follow.

However, in just about any design discipline, new design movements and creative innovation have generally resulted from consciously breaking that rule book. This is possible because design is conditional and requires the discretion of the designer, rather than being a process with any sort of finite answers. Therefore, the design rules should be considered guidelines rather than hard and fast rules. The experienced designer knows and respects the rule book just enough to be able to break out of the box.

Unfortunately, the way that design is often discussed online is within sets of do’s and don’ts. Top mistakes and practices for design in 10 easy steps! Design isn’t so straightforward, and requires a much more robust understanding of principles and tendencies, rather than checklists to systematically carry out.

The concern is that if designers were to cease ‘breaking the rules’, then nothing creatively new would ever be made. If UI designers only develop their ability to follow guidelines, rather than make their own decisions, then they may quickly become irrelevant. How else will we argue for a value greater than off-the-shelf templates?

Be Wary of Top Ten Design Rules

The issue with design rules in today's UI design community is that they are so abundant. In the interest of solving any problem, the designer can look to the existing UI community and its set of solutions, rather than solve the issue on their own. However, the sheer abundance of these guides and rules has made them less credible.

A Google search for “Top UI Design Mistakes” yields half a million results. So, what are the chances that most, if any, of the authors of these various articles agree with one another? Or that each design tip discussed will accurately match the design problems of a given reader?

Often the educational articles online discuss acute problems, rather than the guiding design principles behind the issue. The result is that new designers will never learn why design works the way that it does. Instead, they only become able to copy what has come before. Isn’t it concerning that in none of these sorts of articles is something like play encouraged?

The designer should have a tool kit of principles to guide them, rather than a book of rules to follow predetermined designs. Press x for parallax scrolling and y for carousels. Before choosing, refer to most recent blog post on which navigational tool is trending. Boring!

Trends are like junk food for designers. Following trends produces cheap designs that may offer some initial payback, but little worth in the long run. Not only can trendy designs become dated or ineffective quickly, but, for you the designer, don't expect to experience any sense of reward when designing this way. Although working to invent your own styles and systems is a lot of work, it's worth it day in and day out. There's just something about copying that never seems to feed the soul.

Common Mistake #2: Allowing the Grid to Restrict UI Design

Despite my treatise against rules – here’s a rule: there is no way for a UI Designer to design without a grid. The web or mobile interface is fundamentally based on a pixel by pixel organization – there’s no way around it. However, this does not necessarily mean that the interface has to restrict designers to gridded appearances, or even gridded processes.

Using the Grid as a Trendy Tool

Generally, making any design moves as a response to trends can easily lead to poor design. Perhaps what results is a satisfactory, mostly functional product. But it will almost certainly be boring or uninteresting. To be trendy is to be commonplace. Therefore, when employing the grid in a design, understand what the grid has to offer as a tool, and what it might convey. Grids generally represent neutrality, as everything within the restraints of a grid appear equal. Grids also allow for a neutral navigational experience. Users can jump from item to item without any interference from the designer’s curatorial hand. Whereas, with other navigational structures, the designer may be able to group content, or establish desired sequences.

Although a useful tool, the grid can be very limiting to designers.

Defaulting to the Grid as a Work Flow

Dylan Fracareta, faculty of RISD and director of PIN-UP Magazine, points out that “most people start off with a 12-column grid…because you can get 3 and 4 off of that”. The danger here is that the designer immediately predetermines anything that they might come up with. Alternatively, Fracareta resorts to only using the move tool with set quantities, rather than physically placing things against a grid line. Although this establishes order, it opens up more potential for unexpected outcomes. Designing for the browser used to mean that we would input some code, wait, and see what happens. Now, web design has returned to a more traditional form of layout design that's “more like adjusting two sheets of transparent paper”. How can we as designers benefit from this process?

Working Without a Grid

Although grids can be restricting, they are one of our most traditional forms of organization. The grid is intuitive. The grid is neutral and unassuming. Therefore, grids allow content to speak for itself, and users to navigate at their will and with ease. Despite my warnings about the restrictiveness of grids, different arrays allow for different levels of guidance or freedom.

Common Mistake #3:The Standardization of UI Design with Patterns

The concept of standardized design elements predates UI design. Architectural details have been repeated in practice for typical conditions for centuries. Generally this practice makes sense for parts of a building that are rarely perceived by a user. However, once architects began to standardize common elements like furniture dimensions or handrail heights, people eventually expressed disinterest in the boring, beige physical environment that resulted. Not only that, but standardized dimensions were proven to be ineffective: although generated as an average, they didn't really apply to the majority of the population. Thus, although repeatable details have their place, they should be used critically.

If we as designers choose to automate, what value are we providing?

Designers Using the Pattern as Product

Many UI designers don't view the pattern as a time-saving tool, but rather as an off-the-shelf solution to design problems. Patterns are intended to take recurring tasks or artefacts and standardize them in order to make the designer's job easier. Instead, certain patterns like F-pattern layouts, carousels, or pagination have become the entire structure of many of our interfaces.

Justification for the Pattern is Skewed

Designers tell themselves that the F-shaped pattern exists because of the way people read on the web. Espen Brunborg points out that perhaps people read this way as a result of us designing for that pattern. "What's the point of having web designers if all they do is follow the recipe?" Brunborg asks.

Common Mistake #4: Misunderstanding Typefaces

Many designers' quick tips suggest hard and fast rules about fonts as well. Each rule is shouted religiously: "One font family only! Monospaced fonts are dead! Avoid thin fonts at all costs!" But really, the only legitimate rules on type, text, and fonts should be to ensure legibility and convey meaning. As long as type is legible, there may very well be an appropriate opportunity for all sorts of typefaces. The UI designer must take on the responsibility of knowing the history, uses, and designed intentions of each font they implement in a UI.

Consider a Typeface Only for Legibility

Typefaces convey meaning as well as affect legibility. With all of the discussion surrounding rules for proper legibility on devices, designers are forgetting that type is designed to give a body of text a sensibility as much as it is meant to be legible. Legibility is critical, I do not dispute this, but my point is that legibility should be an obvious goal. Otherwise, why wouldn't we have just stopped at Helvetica, or maybe Highway Gothic? The important thing to remember is that fonts are not just designed for different contexts of legibility; typefaces are also essential for conveying meaning or giving a body of text a mood.

Typefaces are each designed for their own uses. Don't allow narrow-minded rules to restrict an exploration of the world of type.

Avoiding Thin Fonts At All Costs

Now that the trend has come (and almost gone?), a common piece of design criticism is to avoid thin fonts entirely. Just as thin fonts arrived as a trend, they may leave as one too. The hope, however, should be to understand the principles behind a typeface rather than to follow trends at all.

Some say thin fonts are impossible to read or unreliable across devices. All legitimate points. Yet this represents a condition in the current discussion of UI design: font choice is understood by designers only as a technical choice about legibility, rather than also a question of the meaning and value of a typeface. The concern is that if legibility were a designer's only criterion, would thin fonts be done away with entirely?

Understand why you are using a thin font, and in what contexts. Bold, thick text is actually much more difficult to read at length than thinner fonts. Yet, because bold fonts carry more visual weight, they're more appropriate for headings or for content with little text. And since thin fonts are often serifs, whose characters flow together when read in rapid succession, they make for much more comfortable long reading.

As well, thin fonts are often chosen because they convey elegance. So, if a designer was working on an interface for a client whose mandate was to convey elegance, they might find themselves hard pressed to find a heavy typeface to do the job.

Not Enough Variation

A common mistake is not providing enough variation between fonts in an interface. Changing fonts is a good navigational tool for establishing visual hierarchy, or for distinguishing different functions within an interface. A crash course on hierarchy will teach you that the largest items, or boldest fonts, should generally be the most important and carry the most visual weight. Visual importance can signal content headings, or perhaps frequently used functions.

Too Much Variation

The opposite mistake is to load in several different typefaces from different families, each denoting a unique function. The issue with making every font choice special, when there are many fonts, is that no font stands out. If every font is different, there is too much confusion for a user to recognize any order.

Common Mistake #5: Under/Overestimating the Potential of Contrast

A common item on many "top UI design mistakes" lists is that designers should avoid low-contrast interfaces. There are many instances in which low-contrast designs are illegible and ineffective, true. However, as with the previous points, my worry is that this language simply produces a high-contrast design culture in response.

Defaulting to High Contrast

The issue is that high contrast is aesthetically easy to achieve. High-contrast visuals are undeniably stimulating or exciting. However, there are many more moods in the human imagination to convey than high stimulation. To be visually stimulating may also be to be visually safe.

The same issue is actually occurring in sci-fi film. The entire industry has resorted to black and neon-blue visuals as a way to trick viewers into accepting 'exciting' visuals, instead of new, creative, or beautiful ones. This article points out what the sci-fi industry is missing out on by producing safe visuals.

Functionally, if every element in an interface is in high contrast to another, then nothing stands out. This defeats the potential value of contrast as a hierarchical tool. Considering different design moves as tools, rather than rules to follow is essential in avoiding stagnant, trendy design.

Illegibly Low Contrast

The use of low-contrast fonts and backgrounds is a commonly cited mistake. However, this could arguably be discussed as a beta-testing mistake rather than a design mistake.

How a low-contrast element relates to the rest of the interface is a design concern. The issue could be that the most significant item in the hierarchy contrasts too little with the rest of the interface. For the interface to communicate its organizational structure, the elements need to contrast one another in a deliberate way; that is a design discussion. Whether or not the result is legible is arguably a testing matter.

The point is that by discussing contrast only as a technical issue resolvable by adjusting a value, designers miss out on a critical understanding of what contrast is principally used for.

Conclusion

As with the previous four mistakes, the abuse of patterns will rarely result in a dysfunctional website, but rather just a boring one. The mistake is in being safe. This overly cautious method of design may not cause an individual project to fail. However, this series of safe mistakes, performed across the greater web community, can lead to failures beyond the individual UI design project. The role of the designer should be to imagine, thoughtfully experiment, and create, not to dutifully follow rules and guidelines.

The original article is found on the Toptal Design Blog.

Adding the Maven plugin to Eclipse and setting up your first project


In my previous post I wrote about basic Maven concepts for beginners. In this post I will outline how to install the Maven plugin for Eclipse and create a Maven project in Eclipse.

  1. Go to Help -> "Install New Software".
  2. On the "Install New Software" screen, click "Available Software Sites".
  3. A list of download sites will pop up; at the top is the default site for your Eclipse version (for Juno, for example, it's http://download.eclipse.org/releases/juno). Select it and click OK.
  4. The selected site will now appear in the "Work with" drop-down. Select it to load the available downloads.
  5. This will show all the software available from that site.
  6. Type "maven" in the filter box and select the Maven Integration for Eclipse (m2e) option under "Collaboration".
  7. Click Next and follow the on-screen instructions. That's it, the installation is done.
  8. Eclipse will need a restart. To confirm whether the plugin was successfully installed, open Window -> Preferences -> Java -> Build Path -> Classpath Variables. A variable named M2_REPO should appear. This is the local repository where all downloaded dependencies will be stored.

  9. Now, where can I get a sample Maven project to get started?
  10. Press Ctrl+N to open the New wizard, then select "Maven Project" under the "Maven" section.

  11. Select a project folder. If it does not exist, Maven will create it. DO NOT select just the workspace folder, otherwise all files will be generated directly there instead of inside a project-specific folder.
  12. Clicking Next will ask you to select an archetype by its artifact ID.
  13. In simple terms, the archetype's artifact ID represents the type of project skeleton to be generated, for example a simple Java project, a J2EE project, a Spring MVC project, and so on. Selecting an artifact ID will generate the folder structure for the project along with the POM file, which has all the dependencies defined for that type of project. I have selected Spring MVC.

  14. The next screen will ask for the Group Id, Artifact Id, and Version.

  15. This will create the required project. Take a look at the pom.xml; it already contains all the dependencies for the Spring MVC project. A rough sketch of what such a POM might look like is shown below.
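
For reference, here is a minimal, hand-written sketch of a pom.xml for a Spring MVC web project. The coordinates (com.example, my-spring-mvc-app) and the dependency versions are illustrative assumptions only; the archetype you pick will generate its own coordinates and versions, and usually several more dependencies.

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <!-- Coordinates entered on the Group Id / Artifact Id / Version screen (illustrative values) -->
    <groupId>com.example</groupId>
    <artifactId>my-spring-mvc-app</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <dependencies>
        <!-- Spring MVC framework; the version here is only an example -->
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>3.2.4.RELEASE</version>
        </dependency>

        <!-- Servlet API, provided by the application server at runtime -->
        <dependency>
            <groupId>javax.servlet</groupId>
            <artifactId>javax.servlet-api</artifactId>
            <version>3.0.1</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
</project>

When Eclipse builds the project, Maven downloads these jars into the local repository that the M2_REPO variable from step 8 points to (by default ~/.m2/repository).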