Snowsuit Zine // issue 08

Table of Contents

  • The Age Of Some Automation
  • Articulations: Principles of Scalable API Design
  • Monthly Consumption

The Age Of Some Automation

Technological innovation has been a constant in society for the last few centuries. Humans go from one idea to another to another, making whatever progress they can. The returns for their efforts eventually start to diminish, something else distracts them, and the process starts again.

The pace is remarkable. Steam-powered trolleys came about in 1769. Gasoline-powered cars arrived in 1886. Powered flight followed in 1903. Less than 70 years after that, humans had reached space and landed on the moon.

First, there is an initial breakthrough, like trains or space travel. This changes what humans accept as possible and creates new foundations from which to think creatively, and so subsequent breakthroughs occur. One breakthrough: powered flight. Another: electricity. Simultaneously, the understanding of physics kept improving. All of these contributed knowledge that helped put humans in space and on the moon.

We see the same in industry. Rockefeller's kerosene empire benefited greatly from the existence of a robust train network, which shipped his product across the US and gave his company spectacular reach. Modern telephone companies benefit from space travel because they can put communications satellites in orbit.

Some have speculated that the next revolution will be about automation, that is, machines interacting with the real world. It doesn't take much imagination to go from where things are today to a future with fuel-efficient, self-driving cars on the streets, drones flying through the air, and robots performing mundane physical tasks. Stretch the imagination further and one can picture self-driving cars safely traveling at 300mph, drones safely transporting humans through the air, or robots performing complicated, intelligent tasks.

The potential for a future where robots can perform significant amounts of work, work currently done by humans, has caused some to wonder if society should prepare for a post-work society, where the rules of capitalism break down. The idea is that robots can do so much work that humans no longer need to work themselves. Industry, in its ruthless pursuit of efficiency, will replace humans wherever possible: self-driving trucks and taxis will replace drivers, and so on. One can't help but wonder how far it will go, hence the term "post-work".

To reach a post-work society, several things would have to happen. The age of automation would need to live up to its hopeful promise of rendering most work something a machine can simply do. This would remove jobs from the pool of work humans do. Then, the pool of jobs would need to grow by less than what was removed.

The relationship is simple: if a tech revolution removes significantly more jobs than it creates, a post-work society could exist.

The current revolution is the Information Age, and it is building out infrastructure around us in the form of computers and communication networks. The US Bureau of Labor Statistics for 2014 lists 3.7 million people with computer-related jobs. Within that 3.7 million are 121,000 Web Developers, 365,000 Network and Systems Administrators, and 528,000 Systems Analysts. The computer industry didn't exist 50 years ago, so these jobs are new.

It's not clear how many jobs have been lost because of the computing industry. One can, however, speculate about the appetite of the industry through some observations. For example, Amazon's market cap recently passed Walmart's. More interesting, though, is that Amazon employs 154,100 people while Walmart employs 2.2 million. In the current age, a software company that provides a similar service to a brick & mortar company can do so with an order of magnitude fewer people. So far this multiplier effect has been contained to the digital world; robotics represents an opportunity to achieve it in the physical world too.

According to a 2013 study from Oxford, automation may claim as many as 47% of current jobs by 2033. The study describes two phases taking place over the next 20 years. The first phase replaces people in the obviously replaceable fields, like transportation, production labor, and administrative support. The second phase depends on whether or not good AI is developed: the type that could make decisions in the context of science, engineering, art, or people management.

Pew Research Center asked 1,896 experts how they feel about the potential of automation. 48% believe it will displace more blue- and white-collar jobs than it creates. The other 52% believe it will create more jobs than it displaces.

According to the Pew study, the leading reasons to be positive about what lies ahead are:

  1. Advances in technology may displace certain types of work, but historically they have been a net creator of jobs.
  2. We will adapt to these changes by inventing entirely new types of work, and by taking advantage of uniquely human capabilities.
  3. Technology will free us from day-to-day drudgery, and allow us to define our relationship with “work” in a more positive and socially beneficial way.
  4. Ultimately, we as a society control our own destiny through the choices we make.

From the same Pew study, the leading reasons to be negative are:

  1. Impacts from automation have thus far impacted mostly blue-collar employment; the coming wave of innovation threatens to upend white-collar work as well.
  2. Certain highly-skilled workers will succeed wildly in this new environment—but far more may be displaced into lower paying service industry jobs at best, or permanent unemployment at worst.
  3. Our educational system is not adequately preparing us for work of the future, and our political and economic institutions are poorly equipped to handle these hard choices.

The study shows a roughly 50/50 split between optimism and pessimism. Is it possible the experts are projecting their optimism or pessimism about humanity's future onto their expectations for technology? CBS ran a poll in which Americans split roughly 50/50 on the future. Statista ran a similar survey asking Americans whether they consider themselves optimists. 50% said they do and the rest leaned towards pessimism. Maybe a 50/50 split about the future is to be expected, regardless of the topic.

If we define a post-work society as one where humans no longer work, it seems unrealistic. The Oxford study, which claims roughly half of jobs will be replaced by automation, isn't pessimistic enough in its projections to create this scenario.

Maybe something between the optimists and the pessimists is more accurate. Perhaps 20% of jobs disappear and don't come back, and instead of "post-work" we get "less-work". Less work could mean a future with more vacation time. Parents spend more time with their kids. Engineers spend more time thinking. Artists make more art. Would be nice.

Articulations

Principles of Scalable API Design

The internet is consolidating around a few large services. If one is lucky, their service might be one of them. The transition from a young and innocent service to a mature piece of software that people depend on can be painful. Many services need to update their infrastructure, but in the worst case the API itself makes it impossible to scale up without modifying its semantics. If the service is popular, many users will have written programs against the public API, and modifying the semantics of an API often requires all of those programs to be rewritten, which, depending on the service, can be expensive. Luckily, there is no reason not to build a scalable API, because the principles behind making one are well understood.

A scalable API is one in which the addition of more resources, such as RAM, CPU or disk storage, increases the number of calls the API can process without changing the semantics of the API.

In making a public API for a fledgling service, the first inclination of many is to write something that works in order to ship, and to worry about making it right later. This could be because the author doesn't know how to make a scalable API, or is concerned that spending time building one will distract from delivering. Finally, some developers confuse the semantics of the infrastructure they are developing on top of with the semantics of their API. However, it only takes a few principles to write scalable APIs. While it might take a little longer to develop a scalable API, a poor API can be a significant source of maintenance cost in the future. There are at least three principles an API author should be aware of:

  1. Do not define the semantics of the API by the existing infrastructure.

    Many projects start out on a database with ACID guarantees. This does not mean the API needs to offer ACID guarantees. The semantics of an API should reflect what the API does, not its implementation. In general this means giving the API weaker semantics than the infrastructure on which the service is implemented. For example, accurate counting is easy in an RDBMS, but at scale it can be an expensive computation. A video service's API that shows view counts should therefore state that it returns an approximate value, even if it is currently capable of returning an exact one. Similarly, guaranteeing that all users see a message in their timelines at the same time is easy when one has transactions, but with a million followers it is not possible. (A sketch of this principle follows the list.)

  2. Allow eventually consistent reads whenever possible.

    Most services are read-heavy, so caching becomes the backbone of any scaling strategy. With caching, reads may be stale but eventually become up-to-date as cache entries expire. For example, a Twitter timeline would be expensive to compute every time a user loaded it. Instead, a cached version can be stored and updated whenever a relevant event occurs. In fact, this is roughly how Twitter works, using a fan-out-on-write approach. (A sketch follows the list.)

  3. Make writes idempotent.

    Writes should be done in multiple phases, as described in issue01, where multi-phase writes were presented as a way to maintain correctness in the face of failure. Idempotence matters at scale because at any given point in time there is likely a failure happening somewhere. For example, when making a purchase online, one API call could take the intended purchase amount, verify it is valid, and return a unique, cryptographically signed token. That token could then be used to perform the purchase. The purchase call verifies the token and records it, making the operation idempotent: the only state it needs to track is which tokens it has already seen, even though the surrounding operations are more expensive. (A sketch follows the list.)
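
To make the first principle concrete, here is a minimal Python sketch of a view-count endpoint whose contract promises only an approximate value. The function names and response shape are hypothetical, invented for illustration; the point is that approximation is part of the stated contract, so the implementation is free to become cheaper later.

    # Hypothetical sketch: the contract promises an approximate count,
    # even though today's implementation happens to be exact.

    _view_counts = {"video-123": 1041}  # stand-in for a real database

    def get_view_count(video_id):
        # Internal lookup. Currently exact, but the API never promises that.
        return _view_counts.get(video_id, 0)

    def view_count_endpoint(video_id):
        # Public handler: the response says the value may be approximate,
        # leaving room to swap in a sampled or sharded counter at scale
        # without changing the API's semantics.
        return {
            "video_id": video_id,
            "views": get_view_count(video_id),
            "is_approximate": True,
        }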
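
For the second principle, here is a minimal in-memory sketch of fan-out-on-write, assuming plain dictionaries stand in for a real cache and queue; the function names are invented for illustration. The expensive work happens once at write time, while reads are cheap lookups that may be momentarily stale.

    # Hypothetical sketch of fan-out-on-write; a real system would use a
    # cache and background workers instead of in-process dicts.

    from collections import defaultdict

    followers = defaultdict(set)   # user -> set of followers
    timelines = defaultdict(list)  # user -> precomputed, cached timeline

    def post_message(author, text):
        # Write path: do the expensive fan-out once, at write time.
        for follower in followers[author]:
            timelines[follower].append((author, text))

    def read_timeline(user, limit=20):
        # Read path: a cheap lookup. The result is eventually consistent;
        # a message posted a moment ago may not be fanned out yet.
        return timelines[user][-limit:]

    followers["alice"].update({"bob", "carol"})
    post_message("alice", "hello")
    assert read_timeline("bob") == [("alice", "hello")]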
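
And for the third principle, a sketch of the two-phase purchase using an HMAC-signed token. The endpoint names, token format, and secret handling are simplified assumptions for illustration: the first call validates the amount and signs a token, the second verifies the signature and records the token, so retrying it is harmless.

    # Hypothetical sketch of an idempotent two-phase purchase. A real
    # service would keep the secret in a vault and the seen tokens in
    # durable storage.

    import hashlib
    import hmac
    import uuid

    SECRET = b"server-side-secret"
    _seen_tokens = set()

    def _sign(payload):
        return hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

    def validate_purchase(amount_cents):
        # Phase 1: validate the amount and hand back a signed token.
        if amount_cents <= 0:
            raise ValueError("invalid amount")
        payload = "%s:%d" % (uuid.uuid4(), amount_cents)
        return payload + ":" + _sign(payload)

    def perform_purchase(token):
        # Phase 2: verify the token and charge at most once. Retrying
        # with the same token is a no-op, so clients can safely retry
        # after timeouts or crashes.
        payload, _, signature = token.rpartition(":")
        if not hmac.compare_digest(signature, _sign(payload)):
            raise ValueError("bad token")
        if token in _seen_tokens:
            return "already-processed"
        _seen_tokens.add(token)
        # ... charge the card here ...
        return "charged"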

Of course, each API has its unique constraints, but keeping the above three principles in mind gives an API a good chance of staying stable as the service grows. A stable API means the organization can focus on growing and innovating rather than spending precious developer time on maintenance. This is especially true if one is developing a service that is a backend for other services, for example an ecommerce provider. A backwards-incompatible change can become prohibitively expensive, and the cost of maintaining a poor API only increases as the service becomes more popular.

There is a lot of variation possible within the principles shown above. In a multi-phase write, for example, it might be fine to let the client create the unique token, as in the sketch below. When designing APIs, the focus should be on minimizing long-term costs. This may be a slow process at first, but with practice one will discover that most APIs are small variations on existing ones, and creating a high-quality API becomes a swift and straightforward process.
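
As a sketch of that variation, assume the client generates its own idempotency key (say, a UUID) and resends it on every retry of the same write; the parameter and structure here are invented for illustration. The server then only needs to remember which keys it has already processed.

    # Hypothetical sketch: the client, not the server, creates the
    # unique token and sends it with each retry of the same request.

    _processed = {}  # idempotency_key -> result; durable in a real service

    def create_order(idempotency_key, order):
        # Replaying the same key returns the original result instead of
        # creating a duplicate order.
        if idempotency_key in _processed:
            return _processed[idempotency_key]
        result = {"order_id": len(_processed) + 1, "order": order}
        _processed[idempotency_key] = result
        return result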

Monthly Consumption

Books

  • Use of Weapons by Iain M. Banks (link)
  • The Elements of Typographic Style by Robert Bringhurst (link)

Papers

  • On the folly of rewarding A, while hoping for B by Steven Kerr (link)
  • Do Some Business Models Perform Better than Others? A Study of the 1000 Largest US Firms, by Peter Weill, Thomas W. Malone, Victoria T. D’Urso, George Herman, Stephanie Woerner (link)