August 21, 2020

16 Tech Experts Choose The Best Network Troubleshooting Tools

Network troubleshooting can challenge even the most experienced IT professionals. New issues can crop up any time a device is added or the network changes, and it can be difficult to determine exactly where the problem is. Fortunately, several effective network troubleshooting solutions can make your job significantly easier.

We asked the members of Forbes Technology Council to share their favorite network troubleshooting tools that every IT pro should know about. Try one of their recommended solutions to help you solve your tech problems faster.

1. Nmap

Nmap is an open-source tool and the Swiss Army Knife of network troubleshooting. It’s basically Ping with superpowers, sending packets to identify hosts, their open ports and OS versions. This information is integrated into a network map and inventory, allowing analysts to identify connection issues, vulnerabilities and traffic. - Jaime Manteiga, Venkon Corp.

2. Netstat

With increasing network complexity comes a need to simplify network management to make IT administrators’ time and input more effective. Netstat (derived from the words “network” and “statistics”) is available on Unix-like operating systems as well as on Windows. When dealing with network security, it’s advantageous to be informed about the inbound and outbound connections to your company’s network. - Vikas Khorana, Ntooitive Digital

3. tcpdump

tcpdump is a must-have troubleshooting tool for pros. If they can use it effectively, they can pinpoint network problems quite quickly without affecting unrelated applications. - Vipin Jain, Pensando Systems

4. Ping

Ping is an excellent tool for quickly troubleshooting network problems. It allows you to easily check if a server is down, and it is present in most operating systems. - Ivailo Nikolov, SiteGround

5. TRACERT And Traceroute

TRACERT and Traceroute are invaluable utilities for any IT team. They give detailed insight into the route your data takes and the response time of your intermediate hosts. As anyone in IT can attest to, even the smallest bit of information can help elucidate the problem at hand. For this reason, TRACERT and Traceroute are goldmines when it comes to troubleshooting. - Marc Fischer, Dogtown Media LLC

6. My Traceroute (MTR)

One of the best tools for diagnosing network issues or just exploring network performance is called My Traceroute (MTR). MTR combines the best of Ping and Traceroute into a single tool. It’s a great way to observe both packet loss and latency at the same time. - Cole Crawford, Vapor IO

7. Mockoon

Mockoon is a newer tool that has quickly become invaluable to our engineers. It allows us to create mock APIs and build our front ends against them without needing a backend to work against. By combining Mockoon with Charles, we can even use live APIs in some parts of the system and mock ones in others with very little work needed to switch back and forth. - Luke Wallace, Bottle Rocket

8. Wireshark

Wireshark is one of the best packet capture tools available and is a must-have for network analysis. It is versatile, fast and gives a broad range of tools and filters to identify exactly what’s happening on the network. - Saryu Nayyar, Gurucul

9. OpenVAS

Every IT pro should employ some kind of proactive vulnerability scanning software to detect cyber threats. You’d much rather be troubleshooting potential threats before they enter your systems than trying to fix the damage they caused. I recommend tools like Wireshark and OpenVAS as free, open-source tools that any IT team or pro can use to identify threats to critical data or systems. - John Shin, RSI Security

10. Grey Matter

Grey Matter is the universal mesh. It’s a next-generation network layer (operating at layers 3, 4 and 7) that leverages a C-based proxy for zero-trust security, chain-of-evidence audit compliance, targeted segmentation and low-level reporting, and it’s open-source friendly. If you are trying to figure out the use cases for a “service mesh,” do your research. What’s in the wild is only scratching the surface. - Chris Holmes, Decipher Technology Studios

11. Linux’s Dig Command

The dig tool in Linux is great for helping to work out where a site is hosted, what IPs are associated with it and whether it’s behind a load balancer. This is a tool that is mostly underused. - WaiJe Coler, InfoTracer

12. DNS And NS Lookup Tools

DNS and NS lookup tools should be in every IT pro’s toolbox today. Every device we use—from our smartphones and laptops to IoT devices and network appliances—uses IP and DNS addresses. Conflicts between IPs and devices happen all the time on networks. A solid lookup tool can help isolate the offending device and narrow down the troubleshooting steps to take. - Thomas Griffin, OptinMonster

13. Speedtest-Plotter

Speed and agility are vital to productivity, especially with the increase in remote work. Speedtest-Plotter is a great network troubleshooting tool that measures your internet bandwidth using a server close to you. It allows you to track your speed over time (instead of just a single analysis) while identifying relevant changes in connectivity. - Robert Weissgraeber, AX Semantics

14. Batfish

I highly recommend adding network configuration analysis to your troubleshooting toolkit. While Ping can tell you that something is broken, and Traceroute/MTR can tell you where it’s broken, an open-source tool like Batfish can tell you why it’s broken. Better yet, you can use Batfish, or a similar validation tool, to ensure you don’t break anything in the first place! - Chris Grundemann, Myriad360

15. Fiddler

Perhaps I think too much “SaaS” when I think network. With that said, Wireshark and Fiddler are indispensable tools for SaaS network troubleshooting. - Joe Karbowski, FM:Systems

16. New Relic And Pingdom

I would monitor every system from two sides. First, monitor from the system/server itself to the outside world. I can highly recommend New Relic for this. And second, monitor from outside of your data center to the IP of your machine. Here my tool #1 is Pingdom. This two-sided method gives you an instant view of where the trouble is to be found. - Florian Otte, KELLER Group GmbH

This article was published on Forbes.com

August 3, 2020

Implementing a Flutter CI/CD Pipeline in Jenkins: Part 4(Jenkinsfile — Android and iOS/Fastlane)

In this fourth and final part of the series, I’ll explain the Android and iOS portions of the Jenkinsfile and what is necessary to distribute builds to AppCenter.

In the Jenkinsfile, to build an Android Release APK, the section below is all that’s necessary.

stage ('Flutter Build APK') {
      steps {
          sh "flutter build apk --split-per-abi"
      }
}

However, in order for this to be successful, you must confirm that the Android SDK (version 28 or above) is installed on the machine Jenkins is running on. Additionally, you must include the .jks key file necessary for signing the APK in the base directory of the Flutter project and confirm that you are able to build a release APK locally. Steps for this are found in Flutter’s documentation, and a quick sketch follows below.
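
If you haven’t created a signing key yet, a minimal sketch of generating one and verifying a signed release build locally might look like the following (the keystore name and alias are illustrative; wiring the key into your Gradle signing config is covered in the Flutter docs):

# Generate an upload keystore (example file name and alias; adjust for your project)
keytool -genkey -v -keystore flutter_beer.jks \
  -keyalg RSA -keysize 2048 -validity 10000 -alias upload

# Confirm that a signed release APK builds locally before handing it to Jenkins
flutter build apk --split-per-abi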

After building, in order for those built files to be of use, they must be distributed somewhere so testers can access them. Luckily, AppCenter has a Jenkins plugin you can install that makes this super simple. In order to upload to AppCenter, you need to have set up your AppCenter account and have an API token, an app in AppCenter for Android, and a distribution group for that app.

Assuming you’ve completed Part 2 of this series, the AppCenter plugin should already be installed. If not, install it now. Once the AppCenter plugin is installed and Jenkins has rebooted, this next stage will work. Simply fill in your API Token, Owner Name (your AppCenter account name, not your email), App Name in AppCenter, and Distribution Group. The pathToApp value should remain the same across all projects, though you may want to distribute multiple APKs if you’re targeting devices with different processor requirements.

stage('Distribute Android APK') {
      steps {
          appCenter apiToken: 'APITOKENHERE',
                  ownerName: 'Phtiv08',
                  appName: 'Flutter-Beer',
                  pathToApp: 'build/app/outputs/apk/release/app-arm64-v8a-release.apk',
                  distributionGroups: 'Flutter-Beer-Distribution'
      }
}

Finally, I’ll go over the iOS stages of this Jenkinsfile. The first iOS stage requires no special handling and should work without any fancy legwork. Note that this step doesn’t sign the build, as signing will happen in the archive step handled by Fastlane.


stage('Flutter Build iOS') {
    steps {
        sh "flutter build ios --release --no-codesign"
    }
}

The next stage, unfortunately, is a bit more involved. Because of the complexity of the keychain and signing processes for iOS, I leverage the brbuild_ios repo on Bitbucket to create a custom, temporary keychain. This requires a .mobileprovision file in the ios/ios-provisioning-profile-vault folder of your Flutter project. It also requires two other files: a .p12 file, created by exporting your saved distribution certificate (and its private key) from your local machine’s Keychain Access, and a .pass file containing the password to that .p12 file. Both of these files should be in the ios/ios-p12-vault folder of your Flutter project. Additionally, you must open Xcode and set the Runner target to use this provisioning profile on your local machine.
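
Before committing these files, a quick local sanity check can save a broken Jenkins run later. A minimal sketch (paths and file names match the vault folders described above and the Fastfile shown later; adjust for your own project):

# Confirm the exported .p12 opens with its password (you'll be prompted to enter it)
openssl pkcs12 -info -in ios/ios-p12-vault/Flutter_Cert.p12 -noout

# Confirm both vault folders contain the files brbuild_ios expects
ls ios/ios-provisioning-profile-vault ios/ios-p12-vault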

After you’ve created your distribution certificate and provisioning profile, generated your .p12 and .pass files, and put them where they need to go, you can move on to your Fastfile, which executes the iOS build and distribution commands. Below is the full Fastfile used in my pipeline.

fastlane_version "2.102.0"
default_platform(:ios)
 
import("brbuild_ios/fastlane/Fastfile")
 
platform :ios do
 
  build_number = ENV["BUILD_NUMBER"]
  output_name = "FlutterBeer_AdHoc_#{build_number}"
  ipa_name = "./.build/#{output_name}"
  scheme = "Runner"
 
  def kill_simulators
    Action.sh("killall -9 Simulator 2>/dev/null || echo No simulators running")
  end
 
  def setup
    build_setup(certificate_names: ["Flutter_Cert.p12"],
                provisioning_profile_names: ["Flutter_Beer_Ad_Hoc.mobileprovision"],
                should_log: true)
  end
 
  def cleanup
    build_cleanup
    clear_derived_data(derived_data_path: "./dd")
  end
 
  desc "The buildAdHocCore lane builds the FlutterBeer archive in the ad-hoc configuration"
  lane :buildAdHocCore do
    gym(scheme: "#{scheme}", configuration: "Release", output_name: "#{output_name}", clean: true, export_method: "ad-hoc",
        output_directory: "./.build", archive_path: ipa_name, derived_data_path: "./dd")
  end
 
  desc "The uploadToAppCenter lane uploads a pre-built IPA to AppCenter"
  lane :uploadToAppCenter do
      appcenter_upload(
      api_token: "APITOKENHERE",
      owner_name: "Phtiv08",
      owner_type: "user",
      app_name: "Flutter-Beer-iOS",
      file: ".build/#{output_name}.ipa",
      destinations: 'Flutter-Beer-iOS-Distribution',
      destination_type: 'group',
      notify_testers: true
    )
  end
 
  desc "The buildAdHoc lane builds the FlutterBeer archive in the ad-hoc configuration"
  lane :buildAdHoc do
    begin
      setup
      buildAdHocCore
      uploadToAppCenter
    rescue => exception
      cleanup
      raise exception
    else
      cleanup
    end
  end
end

I’d like to go over a couple of sections of this Fastfile to explain their functions. First, the setup step calls a function from the brbuild_ios repo that sets up the temporary keychain with the supplied certificates and provisioning profiles.


def setup
  build_setup(certificate_names: ["Flutter_Cert.p12"],
              provisioning_profile_names: ["Flutter_Beer_Ad_Hoc.mobileprovision"],
              should_log: true)
end

The second block I want to go over is the buildAdHocCore lane. This lane builds the iOS archive and then the IPA to distribute. The ‘scheme’ variable passed in should always be ‘Runner’ for Flutter projects, because ‘Runner’ is the name of the Xcode project auto-generated by Flutter and built by the ‘flutter build ios’ command. If you want other configurations and export methods, they can be defined in this lane.

desc "The buildAdHocCore lane builds the FlutterBeer archive in the ad-hoc configuration"
  lane :buildAdHocCore do
    gym(scheme: "#{scheme}", configuration: "Release", output_name: "#{output_name}", clean: true, export_method: "ad-hoc",
        output_directory: "./.build", archive_path: ipa_name, derived_data_path: "./dd")
end

Before you attempt to run your Fastfile, assuming you’re going to distribute via AppCenter, you should navigate locally to your Flutter project’s ios directory, run the following command, and commit the resulting file. The AppCenter Fastlane plugin’s documentation covers it in more detail.

fastlane add_plugin appcenter
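
The resulting file is fastlane/Pluginfile (the Gemfile is also updated to load it); commit both. A quick look at it from the ios directory should show something like this:

# Run from the ios directory after the add_plugin command above
cat fastlane/Pluginfile
# gem 'fastlane-plugin-appcenter'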

Next, we run the Fastlane commands defined in the above fastfile. The ‘dir’ block in this step tells Jenkins where to run the fastlane commands.

stage('Make iOS IPA') {
    steps {
        dir('ios'){
                sh "bundle install"
                sh "bundle exec fastlane buildAdHoc --verbose"
        }
    }
}

Because distribution of the iOS build is handled in Fastlane, it’s unnecessary to have a distribution stage for iOS in the Jenkinsfile. The last stage of the Jenkinsfile should clean your Flutter project, which ensures that leftover artifacts from failed builds don’t corrupt future builds.

stage('Cleanup') {
    steps {
        sh "flutter clean"
    }
}

One more parting thought:

If you are getting errors with code ‘65’, that likely means something is wrong with your signing setup. The first thing I would do is confirm that you can build locally with the .mobileprovision file and certificate that you include in your repo for brbuild_ios to use. Next, I would check your Jenkins machine and ensure that there aren’t any duplicate certificates or provisioning profiles in Keychain Access.

That concludes this series. I hope this helps you create a CI/CD pipeline for your Flutter project!

July 27, 2020

Implementing a Flutter CI/CD Pipeline in Jenkins: Part 3 (Jenkinsfile Setup)

This step assumes you have a Jenkins server initiated and that you have gone through the initial setup steps in Parts 1 and 2. Additionally, this step assumes that you are on macOS and have correctly installed Flutter, the Android SDK, and all necessary Xcode tools, including the command-line tools.

Welcome to Part 3 of a four-part series on CI/CD with Jenkins and Flutter. In this part, we will go over the Jenkinsfile and explain what each section does. Below is the full file; after it, I’ll break out each section and explain what it does. Your Jenkinsfile should be in your Flutter project’s base directory.

def appname = "Runner" //DON'T CHANGE THIS. This refers to the flutter 'Runner' target.
def xcarchive = "${appname}.xcarchive"

pipeline {
    agent { label 'Flutter_v2020_05' } //Change this to whatever your flutter jenkins nodes are labeled.
    environment {
        DEVELOPER_DIR="/Applications/Xcode.app/Contents/Developer/"  //This is necessary for Fastlane to access iOS Build things.
        PATH = "/Users/jenkins/.rbenv/shims:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/jenkins/Documents/flutter/bin:/usr/local/Caskroom/android-sdk/4333796//tools:/usr/local/Caskroom/android-sdk/4333796//platform-tools:/Applications/Xcode.app/Contents/Developer"
    }
    stages {
        stage ('Checkout') {
            steps {
                step([$class: 'WsCleanup'])
                checkout scm
                sh "rm -rf brbuild_ios" //This removes the previous checkout of brbuild_ios if it exists.
                sh "rm -rf ios/fastlane/brbuild_ios" //This removes the brbuild_ios from the fastlane directory if it somehow still exists
                sh "GIT_SSH_COMMAND='ssh -i ~/.ssh/ios_dependencies' git clone --depth 1 [email protected]:BottleRocket/brbuild_ios.git" //This checks out the brbuild_ios library from BottleRocket's Bitbucket
                sh "mv brbuild_ios ios/fastlane" //This moves the just checked out brbuild_ios to the fastlane directory for easier importing
            }
        }
        stage ('Flutter Doctor') {
            steps {
                sh "flutter doctor -v"
            }
        }
        stage ('Run Flutter Tests') {
            steps {
                sh "flutter test --coverage test/logic_tests.dart"
            }
        }
        stage ('Flutter Build APK') {
            steps {
                sh "flutter build apk --split-per-abi"
            }
        }
        stage('Distribute Android APK') {
            steps {
                appCenter apiToken: 'API_TOKEN_HERE',
                        ownerName: 'OWNER_NAME',
                        appName: 'APP_NAME',
                        pathToApp: 'build/app/outputs/apk/release/app-arm64-v8a-release.apk',
                        distributionGroups: 'DISTRIBUTION_GROUP'
            }
        }
        stage('Flutter Build iOS') {
            steps {
                sh "flutter build ios --release --no-codesign"
            }
        }
        stage('Make iOS IPA And Distribute') {
                steps {
                    dir('ios'){
                            sh "bundle install"
                            sh "bundle exec fastlane buildAdHoc --verbose" 
                    }
                }
        }
        stage('Cleanup') {
            steps {
                sh "flutter clean"
            }
        }
    }
}

These first two lines define constants for use in the rest of the file. In Flutter, the iOS project is called ‘Runner’. This name must not change and must be used when building and archiving the iOS app.

def appname = "Runner" //DON'T CHANGE THIS. This refers to the flutter 'Runner' target
def xcarchive = "${appname}.xcarchive"

Next, the pipeline block begins, and ‘agent’ is defined. The agent section specifies where the entire pipeline, or a specific stage, will execute in the Jenkins environment, depending on where the agent section is placed. The section must be defined at the top level inside the pipeline block, but stage-level usage is optional. For our purposes, I use a Flutter-specific node with the label ‘Flutter_v2020_05’.

pipeline {
    agent { label 'JENKINS_NODE_NAME_HERE' } //Change this to whatever your flutter jenkins nodes are labeled.
...
}

The ‘environment’ block is critical. It lets you add entries to Jenkins’ PATH so that it knows where the Flutter, Android, and Xcode command-line tools are, all of which are necessary for our pipeline to run successfully. The other part of this block is the ‘DEVELOPER_DIR’ variable, which points Fastlane at the Xcode developer directory it needs in order to build for iOS. (A quick way to find these paths on your machine is sketched after the block.)

environment {
    DEVELOPER_DIR="/Applications/Xcode.app/Contents/Developer/"  //This is necessary for Fastlane to access iOS Build things.
    PATH = "/Users/jenkins/.rbenv/shims:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/jenkins/Documents/flutter/bin:/usr/local/Caskroom/android-sdk/4333796//tools:/usr/local/Caskroom/android-sdk/4333796//platform-tools:/Applications/Xcode.app/Contents/Developer"
}
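
If you’re unsure what to put in these variables for your build node, a few commands run as the Jenkins user will surface the right values (output will differ per machine):

# Locate the Flutter SDK that Jenkins should use
which flutter

# Print the Xcode developer directory (the value for DEVELOPER_DIR)
xcode-select -p

# Print the Android SDK location, if the variable is set in your shell
echo "$ANDROID_SDK_ROOT"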

The ‘stages’ block defines sections of the pipeline to run. If one stage fails, the entire pipeline build is marked as a failure. The first stage in this pipeline is where the Bitbucket project is checked out and set up for use in future stages. This stage also contains a step that clears out the workspace so that each time the pipeline runs, it has a fresh, empty folder. Next, there are four lines that clear out previously checked-out copies of brbuild_ios, check out a new copy, and move it into the fastlane directory for easier importing. The brbuild_ios repo is very important to this pipeline, as it handles custom keychain generation and usage.

stages {
    stage ('Checkout') {
        steps {
            step([$class: 'WsCleanup'])
            checkout scm
            sh "rm -rf brbuild_ios" //This removes the previous checkout of brbuild_ios if it exists.
            sh "rm -rf ios/fastlane/brbuild_ios" //This removes the brbuild_ios from the fastlane directory if it somehow still exists
            sh "GIT_SSH_COMMAND='ssh -i ~/.ssh/ios_dependencies' git clone --depth 1 [email protected]:BottleRocket/brbuild_ios.git" //This checks out the brbuild_ios library from BottleRocket's Bitbucket
            sh "mv brbuild_ios ios/fastlane" //This moves the just checked out brbuild_ios to the fastlane directory for easier importing

        }
    }
...
}

The second stage executes the ‘flutter doctor’ command (with the -v flag for verbose output), which ensures the Flutter environment on the machine is fully installed and working for both iOS and Android.

stage ('Flutter Doctor') {
    steps {
        sh "flutter doctor -v"
    }
}

Stage 3 is where the Flutter tests, written in Dart, are executed. This single line also generates a coverage report (a note on where the report lands follows the block).

stage ('Run Flutter Tests') {
    steps {
        sh "flutter test --coverage test/logic_tests.dart"
    }
}
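
The --coverage flag writes an lcov report to coverage/lcov.info in the project root. If you want a browsable HTML report, a minimal sketch using the standard lcov tooling (assuming genhtml is installed on the build machine, for example via Homebrew’s lcov package) would be:

# Render the lcov data produced by 'flutter test --coverage' as HTML
genhtml coverage/lcov.info -o coverage/html

# View it locally; skip this on a headless Jenkins node
open coverage/html/index.html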

In the next part (Part 4) of this series, I will explain the Android and iOS parts of the Jenkinsfile and everything necessary for that part of the Jenkins pipeline to build and complete successfully.

July 27, 2020

Tech Pros: Boost Your Skills With These 14 Resources

In the era of social distancing and remote work, those who no longer have travel plans and daily commutes find themselves with extra time on their hands. Many are taking this time to learn new professional skills and brush up old ones. But what are the best resources for tech pros looking for educational content?

We asked 14 members of Forbes Technology Council which online (and offline) resources they recommend other tech pros tap into. Try their recommendations if you’re interested in professional development during your downtime.

1. edX

Having completed a certificate program in data science on edX.org, I can vouch for the quality of both the content and the delivery platform. Most courses are free to audit, and there is an exceptionally wide variety of subjects offered by the world’s top educational institutions. Courses are suitable for everyone from beginners to experts. Learning is essential, and there are many other great resources like edX. - Gerald Morrison, SigmaSense LLC

2. E-Books And Pluralsight

My Kindle has been my best resource. I have tried to read as much as possible—all the things that I had put aside for later. Greater awareness and knowledge make you a much better professional and a more well-rounded individual. You will quickly discover the skills you’re missing and the actions you need to take to fill those gaps. I have also found Pluralsight to be a great resource for goal-oriented tech training. - Samiran Ghosh, Rockmetric

3. Coursera

I have personally found a few interesting courses offered on Coursera. There are several different options, not only for professional growth but also for personal growth. Since we invest so much time into our professional lives, I found the lockdown to be a fantastic opportunity to invest more in personal growth, such as time management and even mindfulness. - Amir Kotler, Veego Software

4. Cybrary

With the large number of unfilled positions in cybersecurity and the 40 million jobs that have gone away during the Covid-19 pandemic, there is no better time to make a transition into cybersecurity. Cybrary.it offers free online training, career maps and practice tests to help people get the skills and certifications needed to land a job in cybersecurity. - Terence Jackson, Thycotic

5. Online Conferences

Many small conferences and meetups have temporarily transitioned online, and often they are making their sessions free to watch. Without the burden of traveling, it is easier to carve out a few hours to hear from experts you might never see in person, and you can still gather a plethora of knowledge. - Luke Wallace, Bottle Rocket

6. Public Libraries

Not only is the public library a great source of e-books and audiobooks, but most memberships include a subscription to LinkedIn Learning (previously Lynda.com), where you can brush up on skills like Web design, prototyping and UX. Additionally, as accessible alternatives to online classes, companies like Intercom, IDEO and InVision have podcasts where you’ll find “best-in-class” methods on how to revolutionize the user experience. - Cecile Lee, Trendalytics

7. A Cloud Guru

As businesses move more to the cloud to improve their capabilities and grow, cloud architecture skills are in demand. I have found A Cloud Guru to be an excellent resource for managers to stay up to date on the capabilities of the cloud. Also, hands-on developers can upskill with certification courses that prove the knowledge they have gained. - Glyn Roberts, iTechArt Group

8. Product Hunt

Product Hunt is a great resource for seeing what new apps and tools are being built and what’s trending. I recommend looking at these not necessarily from a user perspective but from a professional point of view to see how they are built, their UX/CX, value proposition, etc. Once you’ve identified what’s working for these products you can further hone your skills and products to match. - Robert Weissgraeber, AX Semantics

9. YouTube

YouTube is one of my greatest resources. It is truly the wealth of human knowledge, in full-motion videos that are searchable and easily consumed. Nothing compares to real people sharing wisdom and opinions for all to see and comment on. It’s what we should rely on as the default means of record-keeping. Nothing else compares. - Tom Roberto, Core Technology Solutions

10. Duolingo

I have been learning a new language on Duolingo since social distancing began. It is a free resource, and the platform is engaging and easy to use. I have found that spending an hour a day allows for significant progress in a relatively short time. I think it is a good idea to have a decent understanding of multiple languages since our world is becoming more interconnected than ever before. - Abishek Surana Rajendra, Course Hero

11. Codecademy

Codecademy has been helping me brush up on some of the newer frameworks and libraries out there, like React.js and AngularJS. Because I’m no longer involved in the daily development projects of my companies and manage them as a leader and CEO, I want to be sure to stay updated on what’s out there. Plus, it lets me flex my coding muscles a little, which is fun for me. - Thomas Griffin, OptinMonster

12. Volunteer Platforms

Don’t underestimate the value of real-world skill practice. I recommend using online resources to find opportunities to apply your skills for the greater good. VolunteerMatch or United Way can connect tech workers with deserving nonprofits in need of talented volunteers. Donating time and knowledge is a valuable way to apply tech skills to real-world projects and significantly benefit the community. - Shiv Sundar, Esper

13. Class Central

Online resources like edX and Class Central offer free courses from top universities, including Harvard, UC Berkeley and MIT. Taking this time to invest in education and training can have massive returns in the future. From introductory courses on marketing to ancient civilizations, there are tons of opportunities to learn from top institutions in your own home. - Ryan Chan, UpKeep Maintenance Management

14. Udemy

I am a big fan of Udemy. The courses in the platform cover many relevant topics, from finance and marketing to development and design. With a variety of options, one can become a more versatile worker and ultimately contribute to one’s company in a multidisciplinary manner. - Ashwini Choudhary, Recogni

This article was published on Forbes.com

July 21, 2020

Empathetic API Design for Technical Consumers

Follow these five best practices

Design is important in many aspects of development, and that is especially true for development that drives user experience. We’ve learned that API design can have a profound impact on user interfaces and, thus, user experiences. Poorly designed APIs can lead to awkward, unnatural, or inefficient workflows within a user’s experience, while a well-designed API can mitigate these issues or at least make clear to consumers what is technically possible. Ultimately, user interfaces are a representation of the underlying API.

Here are five design, implementation, performance, and security best practices to keep in mind when producing APIs.

Make Your API a Priority

APIs should be designed and developed with simplicity, consistency, and ease of use in mind. Accomplishing these things may be easy in a silo, but the consumer’s view may be drastically different, since their needs or wants were likely not taken into account. It’s always important to design and iterate closely with consumers and/or clients before producing any long-term implementation. The best way to do this is by practicing API-first design, a design methodology focused on collaboration with stakeholders. Continuing with the alternative may result in an API that conforms to the existing system or simply serves as a conduit to the underlying database, which will almost always ignore the client’s workflows. A great analogy exists in the construction world — you wouldn’t build a house and then draft blueprints.

Additionally, by leading with API design, it’s possible to identify API specification format and tooling from the beginning. The API specification should be described in an established format, such as OpenAPI, API Blueprint, or RAML. An established format is likely to have sufficient tooling that clients are familiar with, like Apiary, Redocly, or Swagger Hub. Depending on the time gap between API design and client development, it may be appropriate to consider mocking functionality, which most established tools will have. Mocking is a good way to give prospective consumers a tour of example data while the implementation is built out.

Structure With Resource-Oriented Design

There is a very common architectural style known as REST that has been the de facto standard for APIs for some time now. APIs developed using REST architecture are said to be RESTful. In general, REST provides important and well-known architectural constraints but lacks concrete guidance on authoring APIs. There is an expanded architectural style known as resource-oriented design, satisfying all the constraints of REST, that serves as a good API design reference. If we compare REST to SQL, resource-oriented design provides normalization properties for REST similar to what 2NF and 3NF provide for SQL. Here are a few constraints to adhere to:

  • The API URIs should be modeled as a resource hierarchy where each node is a resource or collection resource
    • Resource — a representation of some entity in the underlying application (e.g., /todo-lists/1, /todo-lists/1/items/1)
    • Collection — a list of resources all of the same type (e.g., /todo-lists, /todo-lists/1/items)
  • The URI path — resource path — should uniquely identify a resource (e.g., /todo-lists/1, /todo-lists/56)
  • Actions should be handled by HTTP Methods

Following these constraints wherever practical will lead to a normalized API that’s consistent and predictable. Also, by leaving actionable verbiage out of URIs, it’s easier to ensure that every resource path is a representation of some entity. This is why verbs are frowned upon in the URI, as they are likely not a representation of some underlying entity — /activate is likely not a resource.

As far as the resources themselves, there is no universally accepted answer on the format used to represent them. JSON is widely adopted and understood by the majority of platforms; however, XML or other formats may serve consumers just as well or better in certain situations. The key thing is to represent resources in a way that is quick and easy for consumers to understand.

Representing resources in this fashion enables them to speak for themselves, known as self-descriptive resources. Self-descriptive resources that are documented using established tooling and open standards will build a sense of trust with the consumer. Ultimately, consumers should buy in to what the API is selling without additional fluff material.
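
As a concrete illustration, fetching a single resource from the hierarchy described above might look like this (the todo-list API, host, and fields are hypothetical):

# GET a single resource; the JSON body is a self-descriptive representation of the entity
curl -s https://api.example.com/todo-lists/1

# Example response (illustrative fields):
# {
#   "id": 1,
#   "name": "Groceries",
#   "items": "https://api.example.com/todo-lists/1/items"
# }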

Semantics with HTTP and REST

In order to make an API come to life, it needs to be actionable, especially since it will likely be used to inform a client’s user interface — the buttons need to do something. Actions, in the context of REST, should be fulfilled by HTTP methods, each with an intended purpose as described here (a brief curl sketch follows the list):

  • GET — query/search for resources, expected to be idempotent and, thus, cacheable
  • POST — most flexible REST semantic, any non-idempotent actions excluding deletes should be handled by this method
  • PUT — used to replace entire resources, and expected to be idempotent
  • PATCH — used to make partial modifications to resources; unlike PUT, it is not required to be idempotent by the HTTP spec
  • DELETE — removes a resource from the API, should be idempotent from the caller’s view
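
A brief sketch of how those methods map onto the hypothetical todo-list resources from earlier (host and payloads are illustrative):

# Create a new resource in a collection (non-idempotent)
curl -X POST https://api.example.com/todo-lists \
     -H "Content-Type: application/json" -d '{"name": "Groceries"}'

# Replace an entire resource (idempotent)
curl -X PUT https://api.example.com/todo-lists/1 \
     -H "Content-Type: application/json" -d '{"name": "Weekly groceries"}'

# Partially modify a resource
curl -X PATCH https://api.example.com/todo-lists/1 \
     -H "Content-Type: application/json" -d '{"name": "Groceries and sundries"}'

# Remove a resource
curl -X DELETE https://api.example.com/todo-lists/1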

Additionally, the result of each action should be returned to the client with a proper status code as defined by the HTTP standard — you won’t need all of them, but most likely more than you think. Here are some common statuses that will likely be required on any API project (a quick curl check follows the list):

  • Successful responses (200 – 299)
    • 200 — responses with a body for everything besides a creation action
    • 201 — responses for creation actions
    • 202 — responses for a long running process
    • 204 — responses that don’t require a body
  • Client errors (400 – 499)
    • 400 — invalid or bad request, appropriate for syntactic errors
    • 401 — unauthenticated request due to missing, invalid, or expired credentials
    • 403 — insufficient permissions (e.g., wrong OAuth scope, requires admin privileges)
    • 404 — resource not found (e.g., represented entity does not exist in database)
    • 409 — resource conflict (e.g., resource already exists)
    • 422 — request is syntactically valid but not semantically valid
  • Server errors (500 – 599)
    • 500 — classic internal or unknown errors for modeling exceptional/unrecoverable errors, sensitive errors, or errors that can’t be elaborated on
    • 501 — method not implemented
    • 503 — server unavailable
    • 504 — timeout
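
A quick way to confirm an endpoint behaves as described is to inspect the status line with curl’s -i flag (URLs and responses are illustrative):

# A successful creation should come back as 201 with the new resource in the body
curl -i -X POST https://api.example.com/todo-lists \
     -H "Content-Type: application/json" -d '{"name": "Groceries"}'
# HTTP/1.1 201 Created

# Requesting an entity that doesn't exist should yield a 404, not a 200 with an empty body
curl -i https://api.example.com/todo-lists/9999
# HTTP/1.1 404 Not Found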

For clarity, an idempotent HTTP method can be called many times with the same outcome. Consumers should be able to understand the information, relationships, and operations an API provides from the resources and the methods on them alone. For example, a GET method that creates or a PUT that deletes a resource will lead to an unpredictable API fraught with unexpected side effects. Proper HTTP methods and status codes — which naturally include constraints such as idempotence — used in conjunction with self-descriptive resources are nothing more than factual representations of the underlying business domain.

Maintaining Performance at the API Layer

Building a fast-performing service requires careful thought and design, often across multiple technical boundaries. The list of things to consider for performance can range from proper use of concurrency in the application down to adequate database indexing — it just depends on the requirements and needs of the business. The API can uphold the performance characteristics of the underlying system in multiple ways. Here are some to consider (a brief sketch follows the list):

  • Asynchronous Server Code — Resources that are an aggregate of multiple independent data sources can be built from the results of multiple async executions.
  • Non-blocking Implementation — APIs that access cold I/O, are CPU-intensive, or are just naturally slow should return a 202 while processing and instruct the client where to get the result. Websockets and callbacks may be appropriate in certain situations as well.
  • Caching — Resources that change relatively slowly can be cached — idempotent GET requests are the low-hanging fruit. Service-level caching may suffice, but a CDN may be needed depending on load.
  • Paging — Collection resources should be outfitted with paging controls for the client to limit results.
  • Statelessness — Keeping track of state across requests can be complex and time-consuming for the server. Ideally, state should be managed by the client and passed to the server. This applies to authentication/authorization as well. Credentials should be passed on each request; a JSON Web Token (JWT) is a good option.
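
Several of these considerations are visible directly in the HTTP exchange. A hypothetical example (parameter names and headers are illustrative, not a standard):

# Paging controls on a collection resource
curl -s "https://api.example.com/todo-lists?page=2&per_page=50"

# Cacheable GETs can advertise their freshness window
curl -i https://api.example.com/todo-lists/1
# Cache-Control: max-age=300

# A slow operation returns 202 and tells the client where to poll for the result
curl -i -X POST https://api.example.com/reports
# HTTP/1.1 202 Accepted
# Location: /reports/jobs/42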

These considerations will go a long way toward meeting satisfactory performance metrics set by the business. Usually, business satisfaction is going to be directly tied to the satisfaction of its consumers — their satisfaction with the user experience and, thus, backing APIs. There has been plenty of research to show that user experience is tied to the response time of a page. Network latency plays a big factor in page load time, so it’s important to reduce this time as much as possible. A good rule of thumb is to keep API response time between 150 and 300 milliseconds, which is the range for average human reaction time.

Securing Your API

Personal and financial information is prevalent on the internet today. It has always been important to safeguard this information. However, there have been many notorious data breaches in recent times that bump security considerations from just another thing to project nonstarters without them. There are two good rules of thumb when it comes to API security — don’t embarrass yourself, and don’t reinvent the wheel. It’s best to leverage open standards, such as OAuth or OpenID, which both cover most authentication flows. It’s also advisable to delegate identity matters to purpose-built identity providers, such as Auth0, Firebase, or Okta. Security is a hard thing to get right, and the aforementioned vendors have solved this challenge, plus gone the extra mile or two. Regardless of the standard and/or provider used, it’s always important to apply proper access controls to API resources. Resources that are sensitive should be locked down with appropriate credentials, and a 401 should be returned when these credentials are not provided. In cases where a given user does not have adequate privileges to a resource, a 403 should be returned.
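
On the wire, that access-control behavior looks something like this (host, paths, and token are placeholders):

# Missing or invalid credentials: the API should answer 401
curl -i https://api.example.com/todo-lists/1
# HTTP/1.1 401 Unauthorized

# Authenticated but lacking privileges for the resource: the API should answer 403
curl -i https://api.example.com/admin/users \
     -H "Authorization: Bearer <jwt-for-a-non-admin-user>"
# HTTP/1.1 403 Forbidden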

API-First Approach

The best practices highlighted above are established practices in the industry that should help with any API project. By taking an API-first approach, you will be on the right path to fostering trust with your consumers and stakeholders. Practicing established REST structure and semantics as well as proper security will be huge in your endeavor. Maintaining performance will keep consumers and customers coming back for more. Ultimately, API design is part art and science. Each API will be different, and there may be some pragmatic decisions to be made. However, it’s critical to not stray too far off the beaten path. It’s important to remember that your data and information is what your consumers and customers are truly seeking, not your radical new API design. Following these best practices will get this information to your all-important customers while keeping your consumers informed all along the way.

This article was originally published in MissionCriticalMagazine.com.
