January’s jobs report was so good that The Atlantic declared it to be ‘without a blemish.’ Job creation remained strong and wages grew as well. I agree that there was a lot to like about it, but the enthusiasm it’s generated also strikes me as a bit unsettling, because it shows how much our expectations have been diminished.

At first glance, job growth looks quite torrid. The New York Times noted that “Since Nov. 1, employers have hired more than one million new workers, the best performance over a three-month period since 1997.” This sounds a bit better than it is: we need to keep in mind that there are also a lot more Americans of working age now than there were in the 1990s, so we need to generate more jobs just to hold steady.

To see this, I used FRED to graph monthly job growth as a percentage of the working-age US population:

[Chart: monthly US job growth as a percentage of the working-age population]
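For anyone who wants to reproduce something like this chart, here’s a minimal sketch using pandas-datareader’s FRED interface. The series choices are my own stand-ins, not necessarily what’s behind the graph above: PAYEMS for total nonfarm payrolls, and CNP16OV (the civilian noninstitutional population, 16 and over) as the working-age population.

```python
# A rough reproduction, assuming these FRED series as stand-ins:
#   PAYEMS  = total nonfarm payrolls (thousands of jobs)
#   CNP16OV = civilian noninstitutional population, 16+ (thousands of people)
import matplotlib.pyplot as plt
import pandas_datareader.data as web

start = "1980-01-01"
payrolls = web.DataReader("PAYEMS", "fred", start)["PAYEMS"]
population = web.DataReader("CNP16OV", "fred", start)["CNP16OV"]

# Monthly change in payrolls as a percentage of the working-age population.
growth_pct = payrolls.diff() / population * 100

growth_pct.plot(title="Monthly job growth, % of working-age population")
plt.show()
```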

This graph shows a steady and encouraging upward trend since the end of the recession, but it also clearly shows that the rate of job creation has been well below what we experienced during the non-recession years of the 1980s and 90s.

And even though the jobs news is good and getting better, labor force participation remains quite low by historical standards, and has generally been declining even since the end of the Great Recession:

[Chart: US labor force participation rate]

This decline seems to have leveled off over the past year, but there’s not much evidence yet that it’s reversing itself.

The more underwhelming trend is that of wage growth. Here’s year-over-year wage growth for the past few years:

[Chart: year-over-year growth in average hourly earnings, all private employees]

Here again, we see an uptick over the past year, but only a small one.

Data for front-line workers goes back a bit farther, and shows that wage growth is significantly lower than it’s been in the past. These workers have also seen a pretty sharp decline in wage growth over the past few months:

[Chart: year-over-year growth in average hourly earnings, production and nonsupervisory employees]
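If you’d like to redraw these wage charts, a sketch along these lines should work. The series IDs are my picks rather than confirmed sources: CES0500000003 (average hourly earnings for all private employees, which only starts in 2006) and AHETPI (production and nonsupervisory employees, which reaches back to 1964 and matches the longer ‘front line’ history above).

```python
# Year-over-year wage growth from FRED (series IDs are my choices):
#   CES0500000003 = avg. hourly earnings, all private employees (from 2006)
#   AHETPI        = avg. hourly earnings, production & nonsupervisory (from 1964)
import matplotlib.pyplot as plt
import pandas_datareader.data as web

all_private = web.DataReader("CES0500000003", "fred", "2006-01-01")["CES0500000003"]
frontline = web.DataReader("AHETPI", "fred", "1964-01-01")["AHETPI"]

# Percent change versus the same month one year earlier.
(frontline.pct_change(12) * 100).plot(label="Production & nonsupervisory", legend=True)
(all_private.pct_change(12) * 100).plot(label="All private employees", legend=True)
plt.title("Year-over-year wage growth (%)")
plt.show()
```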

So while the January employment news is unquestionably good, there’s still a lot of room for it to get better. When I look at the participation and wage growth rates, I get the impression there’s still a lot of slack in the labor force.

Today we got the happy news that The Second Machine Age was named one of the best books of the year by Bloomberg. Their methodology for putting together their list was interesting: they just asked a lot of heavy hitters in finance, industry, and public affairs what their favorite books were.

I’m flattered to see that our book was among the most cited. Mohamed El-Erian, Dominic Barton, Jeff Sachs, Tim Adams, and Pippa Malmgren all gave it a shout-out, which is a great feeling.

The Second Machine Age hit the bestseller lists, but it hasn’t been a runaway commercial hit. We’re very pleased with the sales, but there haven’t been millions of them (at least not yet, he wrote hopefully…).

If you can’t have millions of readers you’d at least like to have highly influential ones, and, as the Bloomberg list indicates, I think we do. I’ve already dropped enough names for one post, so let me just say that I’ve been floored by the number of very senior and/or very smart people who have had good things to say about 2MA.

It’s said about the Velvet Underground that only a few thousand people bought their albums, but every one of them went out and started a band. If we and 2MA can have that same kind of influence — if we can inspire people to change how they’re running their companies, thinking about policy, educating their students, and so on — the book will have succeeded beyond my wildest dreams.

Enterprise 2.0, Finally?

Facebook’s recent announcement that it’s readying a version of its social software for workplaces got me thinking about Enterprise 2.0, a topic I used to think a great deal about. Five years ago I published a book with that title, arguing that enterprise social software platforms would be valuable tools for businesses.

The news from Facebook, along with the rapid uptake of new tools like Slack, the continued success and growth of Salesforce’s Chatter and Yammer (now part of Microsoft), and evidence of a comeback at Jive, indicates that the business world might finally be coming around to Web-style communication and collaboration tools.

Why did it take so long? I can think of a few reasons. It’s hard to get the tools right — useful and simple software is viciously hard to make. Old habits die hard, and old managers die (or at least leave the workforce) slowly. The influx of ever more Millennials has almost certainly helped, since they consider email antediluvian and traditional collaboration software a bad joke.

Whatever the causes, I’m happy to see evidence that appropriate digital technologies are finally appearing to help with the less structured, less formal work of the enterprise. It’s about time.

What do you think? Is Enterprise 2.0 finally here? If so, why now? Leave a comment, please, and let us know.

A while back I set up autopayment on the Citi credit card I used for business expenses, and it’s been working fine. Recently, however, I ran up so many travel expenses in a month that I hit my credit limit (the clearest sign yet that I’ve been on the road too much). So in order to keep further charges from being declined, I went to the Citi credit cards site to make a manual payment.

[Screenshot: the Citi credit card site’s payment page]

I wanted to use the same bank account for this manual payment that I use for my automatic one. But I couldn’t see how to do that, even after looking around the site for a while. The ‘MAKE A PAYMENT’ button was prominent enough, but clicking on it didn’t take me to a page where I could see and select the bank account I use for autopay. Instead, it took me to a form I’d use to enter bank account information from scratch.

This didn’t seem right, but I couldn’t see what else I could do to get to my autopay information. So I launched a chat window and started a conversation with Ryan. Who was very professional and as helpful as he could be. Which was not at all.

[Screenshot: chat transcript]

At this point I got curious.

[Screenshot: chat transcript]

And here the runaround began…

[Screenshot: chat transcript]

I probably should have let Ryan off the hook at this point (after all, he wasn’t responsible for Citi’s user experience), but I didn’t let it go:

[Screenshot: chat transcript]

At some point, Ryan / Citi finally offered a justification. It was that old standby, security:

[Screenshot: chat transcript]

That didn’t make a lot of sense to me:

[Screenshot: chat transcript]

While this interaction was taking place, I went to the site of the credit card I use for personal expenses. That one’s also paid automatically. And when I clicked on the ‘make a payment’ button there, it immediately gave me the option to use my autopay bank account to make that manual payment.

As if they could sense what I was doing, Ryan / Citi then pivoted to a different explanation:

[Screenshot: chat transcript]

I was fascinated by this interaction and ready to keep it going to see what they’d trot out next, but Ryan / Citi threw in the towel:

[Screenshot: chat transcript]

As I say, I felt bad for Ryan throughout. He was being asked to defend a lazy and thoughtless Web design that he had no role in creating. I only kept the conversation going because I was interested to see what justifications the company would offer for making its customers do unnecessary busywork.

Ryan, I’m sorry I kept the chat going so long. Citi, I think you owe me and your other customers an apology. And probably Ryan, too.



I’ve been involved with the Boston Book Festival since Deborah Porter founded it in 2009, and it’s become one of my favorite events of the year. And since I had a for-real, mainstream-published book come out this year (as opposed to a self-published glorified pamphlet), I get to participate as a full-fledged author in the session titled “Technology: Promise and Peril.”

What makes this especially exciting to me is the fact that I’ll share the stage with Nick Carr, who’s one of my favorite writers and thinkers about technology. I don’t praise Nick because I agree with him so often; in fact, over the years we’ve pretty reliably argued about some big questions, including whether IT matters for competitive differentiation and whether Google makes us stupid.

This time around promises to be no different. Nick’s new book The Glass Cage made me think a lot, but what I usually thought was “I don’t agree with that.” I do think that today’s breathtaking technological progress is bringing some serious challenges along with it, but they’re not the ones that Nick highlights.

To hear very different views on tech’s promise and peril, I suggest that you come to our session this Saturday at 11 in the Old South Sanctuary in Boston’s Copley Square. The panel will also feature David Rose, whose new book Enchanted Objects: Design, Human Desire, and the Internet of Things is, for obvious reasons, on my short-term reading list. WBUR’s Sacha Pfeiffer will moderate.

Hope to see you there…

Yesterday we got the good news that The Second Machine Age had been shortlisted for the FT and McKinsey Business Book of the Year Award. Erik and I are floored and very flattered, and looking forward to the award dinner in London in November. I’m pretty sure we’ll watch another author (most likely Thomas Piketty) hoist the trophy, but it’ll be great fun to attend.

[Screenshot: the FT and McKinsey Business Book of the Year shortlist]

In a nice coincidence, next week Erik and I are also giving our first joint public talk about the book since the initial book tour. It’s in Harvard’s gorgeous Sanders Theater on Wednesday October 1 at 4 pm. The event is sponsored by Harvard’s Institute for Learning in Retirement, and is free and open to the public. Please get a ticket in advance by stopping by HILR or the Harvard box office.

If you’re in the Boston area and at all interested, please do come and let us know what questions you have about technological progress and how it’s shaping our businesses, economies, and societies. We hope to see you there…


[Photo: a moose, one of the sights at Jackson Hole]

My MIT colleague David Autor delivered a wonderful paper at the recent Jackson Hole Economic Policy Symposium about American job and wage patterns in recent decades, and their link to the computerization of the economy. I’ll say more later about his paper, which was one of the highlights of the event for me (sighting this moose was another one). For now I just want to highlight one graph that he included, and draw a couple of additional ones.

Autor included a chart very much like the one below, which tracks US corporate spending over time on digital stuff — hardware and software — as a percentage of GDP:
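For anyone who wants to redraw something like it, here’s a rough sketch. A caution: the investment series IDs below are my guesses at the relevant BEA lines on FRED, not Autor’s documented sources, so verify them before leaning on the result.

```python
# IT investment as a share of GDP. NOTE: the two investment series IDs are
# assumptions (my best guesses at the BEA lines); verify on fred.stlouisfed.org.
import matplotlib.pyplot as plt
import pandas_datareader.data as web

start = "1980-01-01"
hardware = web.DataReader("Y033RC1Q027SBEA", "fred", start).squeeze()  # info-processing equipment (assumed ID)
software = web.DataReader("B985RC1Q027SBEA", "fred", start).squeeze()  # software investment (assumed ID)
gdp = web.DataReader("GDP", "fred", start).squeeze()                   # nominal GDP

it_share = (hardware + software) / gdp * 100
it_share.plot(title="US IT investment, % of GDP")
plt.show()
```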

The most striking pattern in this graph is the sharp increase in the late 1990s, and the steep falloff since then. We’re spending just about a full percentage point of GDP less on IT than we were fifteen years ago. This seems like a compelling prima facie case for believing that IT’s impact on the economy and the labor force should be less than it was before the turn of the century.

And this is what Autor believes. As he writes:

the onset of the weak U.S. labor market of the 2000s coincided with a sharp deceleration in computer investment—a fact that appears first-order inconsistent with the onset of a new era of capital-­labor substitution.

I completely agree with him (based largely on his very convincing work) that other factors have strongly shaped the US economy and labor force since 2000, particularly the emergence of China as an offshoring and manufacturing powerhouse. But I’m not so sure that the impact of digital technologies tapered off to the extent the graph above would have us believe.

To see this, let’s break out the data on software. Information processing equipment is simply a vehicle for software, in much the same way that a bottle of 5-hour energy is a delivery system for caffeine. Hardware runs software, in other words, and it’s software that runs things.

It’s easy to lose sight of that fact in an era of gorgeous devices like Apple’s, but without the apps my iPhone is just a… phone. It’s software that is ‘eating the world,’ to use Marc Andreessen’s memorable phrase.

So how has software spending held up? Pretty well:

There was a slight dropoff after the dot-com bubble burst and the Y2K fiasco passed, but we’re back near the all-time software spending peak. It’s true that this spending has been pretty flat for the past fifteen years, but we should keep in mind that this is also the time when open source software, the cloud, and everything-as-a-service burst on the scene. All of these developments have significantly lowered the bill for a given level of enterprise software capability, so I look at the graph above as pretty good evidence of constantly increasing demand for software, even though spending has remained roughly constant for a while now.

The ascendancy of software can be seen in a graph of its share of total IT spending over time:


Software now accounts for over half of all IT spending. As Moore’s Law, volume manufacturing, and the cloud continue to drive down the costs of hardware, I expect software’s share of total spend to continue to rise steadily.
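Computing that share is a small variation on the earlier sketch, again with series IDs that are my assumptions rather than confirmed sources:

```python
# Software's share of total IT investment, reusing the same (assumed) series IDs.
import matplotlib.pyplot as plt
import pandas_datareader.data as web

start = "1980-01-01"
hardware = web.DataReader("Y033RC1Q027SBEA", "fred", start).squeeze()  # assumed hardware-investment ID
software = web.DataReader("B985RC1Q027SBEA", "fred", start).squeeze()  # assumed software-investment ID

share = software / (hardware + software) * 100
share.plot(title="Software as % of total IT investment")
plt.show()
```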

I don’t know what’s going to happen to total IT investment as a percentage of GDP going forward. It does feel to me like a sea change is taking place — that it’s getting so much cheaper to acquire digital technologies that even if demand for them rises strongly in the future, total spending on them might not (or, as an economist would put it, the price effect might be greater than the quantity effect: if prices fall by half while quantity rises 60%, total spending still drops by a fifth).

So even if the first graph above doesn’t greatly change its shape in the years to come, I won’t take that as evidence that the digital revolution has run its course. Will you?


I am sorry to brag, but this really is an all-star lineup. If you’re at all interested in technological progress and its implications for our businesses, economies, and societies, you should attend the 2014 Second Machine Age conference.

It’s being held on September 10 and 11 at the gorgeous MIT Media Lab building, and organized jointly by the Institute’s Industrial Liaison Program and the Initiative on the Digital Economy (which I cofounded with Erik Brynjolfsson). Erik and I are both speaking, but that’s not the exciting part (sorry, Erik). What’s truly exciting is the group of people who have agreed to join us and share their latest work and thinking. Where else can you, in the space of two days, hear from:

  • LinkedIn cofounder Allen Blue
  • DARPA’s Gill Pratt
  • and MIT heavy hitters Sinan Aral, Bill Aulet, David Autor, Cesar Hidalgo, Joi Ito, Fiona Murray, Sandy Pentland, Julie Shah, Scott Stern, Deb Roy, and Daniela Rus?

Nowhere, that’s where.

Most of the seats at the conference are reserved for ILP and IDE members, but there are a few available to the general public. The ILP has graciously given our social media followers a special code to use when registering for the conference. So if you’d like to attend (I can’t fathom why you wouldn’t), enter “2MA150” during general registration and you’ll get discounted admission. There are only a few of these discounted seats available, so sign up now!

Hope to see you on campus in September…

It’s been a while since I posted data on US employment trends, so here’s a chart created with FRED’s snazzy new graphing interface. It shows the employment rate (in other words, 100 minus the standard unemployment rate) in blue, the employment-to-population ratio (the percentage of working-age people with work) in green, and the labor force participation rate (the percentage of working-age people who have work or are actively looking for it) in red.
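If you want to recreate the chart yourself, all three ingredients are standard FRED series, so a short sketch like this should do it:

```python
# Three standard FRED series:
#   UNRATE  = civilian unemployment rate
#   EMRATIO = employment-to-population ratio
#   CIVPART = labor force participation rate
import matplotlib.pyplot as plt
import pandas_datareader.data as web

start = "1975-01-01"
unrate = web.DataReader("UNRATE", "fred", start)["UNRATE"]
emratio = web.DataReader("EMRATIO", "fred", start)["EMRATIO"]
civpart = web.DataReader("CIVPART", "fred", start)["CIVPART"]

(100 - unrate).plot(label="Employment rate (100 minus unemployment)", color="blue", legend=True)
emratio.plot(label="Employment-to-population ratio", color="green", legend=True)
civpart.plot(label="Labor force participation rate", color="red", legend=True)
plt.title("US labor market since 1975")
plt.show()
```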


This graph clearly shows a very steady up-then-down trajectory in the red line, the labor force participation rate. It’s affected very little by recessions (the gray bars in the graph), and instead appears to be responding to deeper forces.

The most obvious of these forces is the demographics of the American labor force. Labor force participation went up a lot in the last two decades of the 20th century largely because women entered the workforce in large numbers. It’s pretty clear that one of the reasons it’s going down now is that lots of baby boomers are retiring.

So is retirement the main reason that the red line is going down these days? There’s a lot of debate and discussion on this topic, nicely summarized in this WaPo Wonkblog post by Brad Plumer, and not much agreement. One study estimated that retirement accounts for about 25% of the drop in the labor force participation rate since the recession’s end, while another says that it’s more than 50%.

I’m more persuaded by the lower figure. As Plumer points out, for example, the participation rate for workers 25-54 years old has been declining steadily in the new century, and these folks are clearly not retiring yet:


Also, disability claims started spiking right around the year 2000, and have almost doubled since then:


So it feels to me like something else is going on, in addition to the graying of the US workforce — some other forces that are causing more and more people in recent years to go to school, stay in school, go on disability, get discouraged and stop jobhunting, stay home to raise kids or take care of a sick or elderly loved one, or do any of the other things that mean they’re no longer categorized as ‘working or looking for work.’

As I’ve argued many times, here and (with Erik Brynjolfsson) in The Second Machine Age, I believe progress in all things digital is one of these forces, and one that will only become more powerful over time. The evidence is pretty clear that tech progress has been hollowing out the middle class for a while now, and has recently started to affect an even broader set of workers. As computers, software, and robots can do more and more, we need some kinds of workers less and less. This would cause more people over time to stop participating in the workforce, and so make the red line up top continue to trend downward even if the blue one heads up.

It would be great if the red line reversed its course in the coming months, but I don’t see that happening. Do you?

My last post here took on Zeynep Tufekci and, by extension, others who believe the current trend of using robots and other forms of advanced technology for caregiving is, as she put it, “an abdication of a desire to remain human, to be connected to each other through care, and to take care of each other.”  I wonder how these self-appointed guardians of our humanity feel about the new iPhone app that provides automated diagnoses of imminent mood swings for people with bipolar disorder.

[Screenshot: the mood-prediction smartphone app]

I love this technology, for the reasons nicely enumerated in this Slate article by Aimee Swartz. Bipolar disorder is common (it affects almost 6 million American adults) and can be very hard to live with, both for people with the condition and for those around them. None of my loved ones have it, thankfully, but I’ve watched families I know well suffer greatly as they try to help one of their members cope with the illness.

Many of the people with it would like to moderate their severe mood swings, but this can be hard to do because, as Swartz explains, “once a manic episode has begun, the patient often becomes unreceptive to recommendations to seek medical care.” So it’s important to get to them as early as possible during the transition to that episode, while they’re still receptive.

This is where PRIORI, a technology under development by Prof. Melvin McInnis and his colleagues at the University of Michigan, comes in. With informed consent and in accordance with privacy rules, PRIORI collects the patients’ sides of their smartphone calls, then analyzes these sound files using machine learning techniques to figure out which characteristics of speech are most strongly related to the onset of a manic episode. Importantly for privacy, the algorithms don’t track or analyze what patients are saying, only how they’re saying it.
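To make the idea concrete, here’s a toy sketch of what such a pipeline might look like. PRIORI’s actual features and models haven’t been published, so everything below (the feature set, the classifier, the file names and labels) is hypothetical: summarize the prosody of each recording, pitch and loudness rather than words, and fit a simple classifier.

```python
# Toy illustration only; PRIORI's real features and models are not public.
# The idea: summarize HOW speech sounds (pitch, loudness), never WHAT is said,
# then train a classifier to flag recordings that resemble a pre-manic state.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def prosodic_features(path):
    """Return pitch and loudness statistics for one audio file."""
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)  # frame-by-frame pitch estimates
    rms = librosa.feature.rms(y=y)[0]              # frame-by-frame loudness
    return np.array([f0.mean(), f0.std(), rms.mean(), rms.std()])

# Hypothetical labeled recordings: 1 = the call preceded a manic episode.
paths = ["call_001.wav", "call_002.wav", "call_003.wav", "call_004.wav"]
labels = [1, 0, 1, 0]

X = np.vstack([prosodic_features(p) for p in paths])
clf = LogisticRegression().fit(X, labels)

# Estimated risk that a new (hypothetical) call precedes an episode.
print(clf.predict_proba(prosodic_features("call_005.wav").reshape(1, -1))[0, 1])
```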

This might well be enough. In a small initial trial involving a half dozen people with relatively rapid mood swings, the app was successful at predicting when one was coming 65% of the time. This is pretty good, and as Swartz points out it’ll only get better with more time and more data.

McInnis explains why this is so important: “The classic way we have of monitoring individuals with bipolar disorder is what I call 18th-century medicine. People see their doctors, chat how their energy levels are during the day, how they’re sleeping at night, how they’re getting along at home and work to gauge how they’re going… [W]e [currently] diagnose an episode when it’s already started, and it’s already causing all kinds of problems.”

In other words, our current, human-centric system of taking care of these highly vulnerable people isn’t working. So what would work better? It looks like a small, unobtrusive piece of technology might work pretty well here, and improve caregiving via automation.

Anyone got a problem with that, or want to make a case for why taking care of our mentally ill loved ones should remain human-centric and low tech? If so, I’m all ears.