Saturday, August 1, 2015

PDF Ebook Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, by Geoff Colvin

How can you make sure that Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, by Geoff Colvin, is never missing from your bookshelf? Get it as a soft file: purchase the digital copy, download it, and it is there to read whenever you need it. When you don't feel like carrying the printed book from home to the office or anywhere else, the soft file spares you the trouble; you only have to save the file on your computer or gadget, and you can read Humans Are Underrated wherever you want to.

Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, by Geoff Colvin. Building a reading habit can feel like learning to eat something you don't really want: it takes extra time, and even lifting the food to your mouth and swallowing it is an effort. Reading is much the same; when you have to read something for a new task, the prospect can make you feel dizzy, and even a book like Humans Are Underrated can seem like a burden.

So make Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, by Geoff Colvin, part of your collection now, even though it will never sit in your bookcase: the book is provided as a soft file. You can download it right away through the link offered here. Unlike readers who have to hunt for the book elsewhere, you get it the easy way; while other people are still walking into the store to search the shelves, you can simply stay in your seat and download Humans Are Underrated.

Those other readers may never find Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, by Geoff Colvin, in a shop at all, and going from store to store takes time; that is why we recommend this website, which offers the best way to get the book. Because it is a soft file, it is easy to carry Humans Are Underrated anywhere or to keep at home. The difference is that you never have to move a physical book from place to place; you only copy the file to your other devices.

Reading this remarkable book becomes far easier once you download the soft file right here. Simply click the link to download Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, by Geoff Colvin, and the book is yours. Be the first owner of this soft-file edition, set yourself apart from the others, and get a head start with Humans Are Underrated here and now!

Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, by Geoff Colvin

As technology races ahead, what will people do better than computers?

What hope will there be for us when computers can drive cars better than humans, predict Supreme Court decisions better than legal experts, identify faces, scurry helpfully around offices and factories, even perform some surgeries, all faster, more reliably, and less expensively than people?

It’s easy to imagine a nightmare scenario in which computers simply take over most of the tasks that people now get paid to do. While we’ll still need high-level decision makers and computer developers, those tasks won’t keep most working-age people employed or allow their living standard to rise. The unavoidable question—will millions of people lose out, unable to best the machine?—is increasingly dominating business, education, economics, and policy.

The bestselling author of Talent Is Overrated explains how the skills the economy values are changing in historic ways. The abilities that will prove most essential to our success are no longer the technical, classroom-taught left-brain skills that economic advances have demanded from workers in the past. Instead, our greatest advantage lies in what we humans are most powerfully driven to do for and with one another, arising from our deepest, most essentially human abilities—empathy, creativity, social sensitivity, storytelling, humor, building relationships, and expressing ourselves with greater power than logic can ever achieve. This is how we create durable value that is not easily replicated by technology—because we’re hardwired to want it from humans.

These high-value skills create tremendous competitive advantage—more devoted customers, stronger cultures, breakthrough ideas, and more effective teams. And while many of us regard these abilities as innate traits—“he’s a real people person,” “she’s naturally creative”—it turns out they can all be developed. They’re already being developed in a range of far-sighted organizations, such as:

• the Cleveland Clinic, which emphasizes empathy training of doctors and all employees to improve patient outcomes and lower medical costs;
• the U.S. Army, which has revolutionized its training to focus on human interaction, leading to stronger teams and greater success in real-world missions;
• Stanford Business School, which has overhauled its curriculum to teach interpersonal skills through human-to-human experiences.

As technology advances, we shouldn’t focus on beating computers at what they do—we’ll lose that contest. Instead, we must develop our most essential human abilities and teach our kids to value not just technology but also the richness of interpersonal experience. They will be the most valuable people in our world because of it. Colvin proves that to a far greater degree than most of us ever imagined, we already have what it takes to be great.

  • Sales Rank: #94440 in Books
  • Published on: 2015-08-04
  • Released on: 2015-08-04
  • Original language: English
  • Number of items: 1
  • Dimensions: 9.31" h x 1.00" w x 6.38" l, 1.00 pounds
  • Binding: Hardcover
  • 256 pages

Review

“Beautifully written and deeply researched, Humans Are Underrated is one of the most creative and insightful leadership books I have ever read. It is a triumph!”
—DORIS KEARNS GOODWIN, Pulitzer Prize–winning historian

“A powerful exposition of the strengths and limitations of technology in shaping our lives and addressing today’s greatest challenges. More than ever, as Colvin demonstrates, we need people who embody the most human of qualities. An uplifting account of the enduring potential of humanity itself.”
—PAUL POLMAN, CEO, Unilever

“As machines inexorably become ever more competent at doing machinelike things, interpersonal skills, irreplaceable skills of human interaction, will come to be recognized as being even more valuable than they’ve always been. This is an extremely important, highly practical, and indeed exhilarating book.”
—SIR MARTIN SORRELL, CEO, WPP

“Through a series of practical case studies and insights, Colvin clearly demonstrates that—regardless of where the future takes us—emotional intelligence will remain one of the most valuable human skills and the Human Element will remain a differentiator.”
—ANDREW N. LIVERIS, chairman and CEO, Dow Chemical Company

“Geoff Colvin’s fresh take on how to respond to the rise of brilliant machines and the changing nature of work is as wise as it is inspiring.”
—DOMINIC BARTON, global managing director, McKinsey & Company

“Corporate leaders often say, ‘People come first.’ True innovation is realized only when their actions match their words.”
—ROBERT GREIFELD, CEO, Nasdaq

About the Author
GEOFF COLVIN, Fortune’s senior editor at large, is one of America’s most respected journalists. He lectures widely and is the regular lead moderator for the Fortune Global Forum. He also appears daily on the CBS Radio Network, reaching seven million listeners each week. His previous book, Talent Is Overrated, was a national bestseller and has been translated into a dozen languages.

Excerpt. © Reprinted by permission. All rights reserved.

CHAPTER ONE

COMPUTERS ARE IMPROVING FASTER THAN YOU ARE

As Technology Becomes More Awesomely Able, What Will Be the High-Value Human Skills of Tomorrow?

I am standing on a stage, behind a waist-high podium with my first name on it. To my right is a woman named Vicki; she’s behind an identical podium with her name on it. Between us is a third podium with no one behind it, just the name “Watson” on the front. We are about to play Jeopardy!

This is the National Retail Federation’s mammoth annual conference at New York City’s Javits Center, and in addition to doing some onstage moderating, I have insanely agreed to compete against IBM’s Watson, the cognitive computing system, whose power the company wants to demonstrate to the 1,200 global retail leaders sitting in front of me. Watson’s celebrated defeat of Jeopardy!’s two greatest champions is almost a year old, so I’m not expecting this to go well. But I’m not prepared for what hits me.

We get to a category called “Before and After at the Movies.” Jeopardy! aficionados have seen this category many times over the years, but I have never heard of it. First clue, for $200: “Han Solo meets up with Lando Calrissian while time traveling with Marty McFly.”

Umm . . . what?

Watson has already buzzed in. “What is The Empire Strikes Back to the Future?” it responds correctly.

It picks the same category for $400: “James Bond fights the Soviets while trying to romance Ali MacGraw before she dies.” I’m still struggling with the concept, but Watson has already buzzed in. “What is From Russia with Love Story?” Right again.

By the time I figure this out, Watson is on the category’s last clue: “John Belushi & the boys set up their fraternity in the museum where crazy Vincent Price turns people into figurines.” The correct response, as Watson instantly knows, is “What is Animal House of Wax?” Watson has run the category.

My humiliation is not totally unrelieved. I do get some questions right in other categories, and Watson gets some wrong. But at the end of our one round I have been shellacked. I actually don’t remember the score, which must be how the psyche protects itself. I just know for sure that I have witnessed something profound.

Realize that Watson is not connected to the Internet. It’s a freestanding machine just like me, relying only on what it knows. It has been loaded with the entire contents of Wikipedia, for example, and much, much more. No one types the clues into Watson; it has to hear and understand the emcee’s spoken words, just as I do. In addition, Watson is intentionally slowed down by a built-in delay when buzzing in to answer a clue. We humans must use our prehistoric muscle systems to push a button that closes a circuit and sounds the buzzer. Watson could do it at light speed with an electronic signal, so the developers interposed a delay to level the playing field. Otherwise I’d never have a prayer of winning, even if we both knew the correct response. But, of course, even with the delay, I lost.

So let’s confront reality: Watson is smarter than I am. In fact, I’m surrounded by technology that’s better than I am at sophisticated tasks. Google’s autonomous car is a better driver than I am. The company has a whole fleet of vehicles that have driven hundreds of thousands of miles with only one accident while in autonomous mode, when one of the cars was rear-ended by a human driver at a stoplight. Computers are better than humans at screening documents for relevance in the discovery phase of litigation, an activity for which young lawyers used to bill at an impressive hourly rate. Computers are better at detecting some kinds of human emotion, despite our million years of evolution that was supposed to make us razor sharp at that skill.

One more thing. I competed against Watson in early 2012. Back then it was the size of a bedroom. As I write, it has shrunk to the size of three stacked pizza boxes, yet it’s also 2,400 percent faster.

More broadly, information technology is doubling in power roughly every two years. I am not—and I’ll guess that you’re not either.

A NIGHTMARE FUTURE?

The mind-bending progress of information technology makes it easier every day for us to imagine a nightmare future. Computers become so capable that they’re simply better at doing thousands of tasks that people now get paid to do. Sure, we’ll still need people to make high-level decisions and to develop even smarter computers, but we won’t need enough such workers to keep the broad mass of working-age people employed, or for their living standard to rise. And so, in the imaginary nightmare future, millions of people will lose out, unable finally to best the machine, struggling hopelessly to live the lives they thought they had earned.

In fact, as we shall see, substantial evidence suggests that technology advances really are playing a role in increasingly stubborn unemployment, slow wage growth, and the trend of college graduates taking jobs that don’t require a bachelor’s degree. If technology is actually a significant cause of those trends, then the miserable outlook becomes hard to dismiss.

But that nightmare future is not inevitable. Some people have suffered as technology has taken away their jobs, and more will do so. But we don’t need to suffer. The essential reality to grasp, larger than we may realize, is that the very nature of work is changing, and the skills that the economy values are changing. We’ve been through these historic shifts a few times before, most famously in the Industrial Revolution. Each time, those who didn’t recognize the shift, or refused to accept it, got left behind. But those who embraced it gained at least the chance to lead far better lives. That’s happening this time as well.

While we’ve seen the general phenomenon before, the way that work changes is different every time, and this time the changes are greater than ever. The skills that will prove most valuable are no longer the technical, classroom-taught, left-brain skills that economic advances have demanded from workers over the past 300 years. Those skills will remain vitally important, but important isn’t the same as valuable; they are becoming commoditized and thus a diminishing source of competitive advantage. The new high-value skills are instead part of our deepest nature, the abilities that literally define us as humans: sensing the thoughts and feelings of others, working productively in groups, building relationships, solving problems together, expressing ourselves with greater power than logic can ever achieve. These are fundamentally different types of skills than those the economy has valued most highly in the past. And unlike some previous revolutions in what the economy values, this one holds the promise of making our work lives not only rewarding financially, but also richer and more satisfying emotionally.

Step one in reaching that future is to think about it in a new way. We shouldn’t focus on beating computers at what they do. We’ll lose that contest. Nor should we even follow the inviting path of trying to divine what computers inherently cannot do—because they can do more every day.

The relentless advance of computer capability is of course merely Moore’s Law at work, as it has been for decades. Still, it’s hard for us to appreciate all the implications of this simple trend. That’s because most things in our world slow down as they get bigger and older; for evidence, just look in the mirror. It’s the same with other living things, singly or in groups. From protozoa to whales, everything eventually stops growing. So do organizations. A small start-up company can easily grow 100 percent a year, but a major Fortune 500 firm may struggle to grow 5 percent.

Technology isn’t constrained that way. It just keeps getting more powerful. Sony’s first transistor radio was advertised as pocket-sized, but it was actually too big, so the company had salesmen’s shirts specially made with extra-large pockets; that radio had five transistors. Intel’s latest chip, the size of your thumbnail, has five billion transistors, and its replacement will have ten billion. Today’s infotech systems, having become as awesomely powerful as they are, will be 100 percent more awesomely powerful in two years. Moore’s law must end eventually, but new technologies in development could be just as effective, and better algorithms are already multiplying computing power in some cases even more than hardware improvements are doing. To imagine that technology won’t keep advancing at a blistering pace seems unwise.

Consider what is being doubled. It isn’t just year-before-last’s achievement in computing power. What gets doubled every two years is everything that has been achieved in the history of computing power up to that point. Back when that progression meant going from five transistors in a device to ten, it didn’t much change the world. Now that it means going from five billion transistors on a tiny chip to ten billion to twenty billion to forty billion—that’s three doublings, just six years—it means literally more than we can imagine.

That’s because it’s so unlike everything else in our world in ways even beyond physical growth rates. For us humans, learning, like growing, gets harder with time. When humans learn to do something, we make slow progress at first—learning how to hold the golf club or turn the steering wheel smoothly—then rapid progress as we get the hang of it, and then our advancement slows down. Pretty soon, most of us are as good as we’re going to get. We can certainly keep improving through devoted practice, but each advance is typically a bit smaller than the one before.

Information technology is just the opposite. When a doubling of computing power for a given price meant going from five transistors to ten, it made a device smarter by only five transistors. Now, after many doublings, the current doubling will make a device smarter by five billion transistors, and the next one will make a device smarter by ten billion.

While people get more skilled by ever smaller increments, computers get more capable by ever larger ones.
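
As a minimal sketch of that doubling arithmetic (assuming the five-billion-transistor starting point and the two-year cadence quoted above, not figures beyond the excerpt), the progression looks like this in Python:

# Rough sketch of the doubling arithmetic described above.
# Assumes ~5 billion transistors today and one doubling every two years.
start_transistors = 5_000_000_000
years_per_doubling = 2

count = start_transistors
for doubling in range(1, 4):  # three doublings, as in the excerpt
    count *= 2
    print(f"after {doubling * years_per_doubling} years: {count / 1e9:.0f} billion transistors")

# after 2 years: 10 billion transistors
# after 4 years: 20 billion transistors
# after 6 years: 40 billion transistors

Each pass through the loop adds as many transistors as all the previous passes combined, which is exactly the point the passage is making.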

The issue is clear and momentous. As technology becomes more capable, advancing inexorably by ever longer two-year strides and acquiring abilities that are increasingly complex and difficult, what will be the high-value human skills of tomorrow—the jobs that will pay well for us and our kids, the competencies that will distinguish winning companies, the traits of dominant nations? To put it starkly: What will people do better than computers?

CHAPTER TWO

GAUGING THE CHALLENGE

A Growing Army of Experts Wonder If Just Maybe the Luddites Aren’t Wrong Anymore.

In the movie Desk Set, a 1957 romantic comedy starring Katharine Hepburn and Spencer Tracy, Hepburn plays the head of the research department at a major TV network. Today a TV network’s research department focuses entirely on audience research, but back then it was a general information resource for anyone at the company, and yes, networks and other companies really had such departments. Equipped with two floors of reference works and other books, its staff stood ready to supply any information that any employee might ask for—the opening lines of Hiawatha, the weight of the earth, the names of Santa’s reindeer (all of which were queries for Hepburn’s department in the movie). That is, employees could pick up the phone, call Katharine Hepburn’s character, and ask in their own words for any information, and she and her staff would search a vast trove of data and return an answer far faster than the caller could ever have found it.

Hepburn’s character is named Miss Watson.

All is well until one day the network boss decides to install a computer—an “electronic brain,” they call it—named EMERAC (a clear reference to ENIAC and UNIVAC, the wonder machines of the era). It was invented by the Spencer Tracy character. Shortly before Miss Watson hears the news that EMERAC is coming to her department, she sees it demonstrated elsewhere, translating Russian into Chinese, among other feats. Her assessment, as expressed to her coworkers: “Frightening. Gave me the feeling that maybe, just maybe, people were a little bit outmoded.”

The Tracy character, Richard Sumner, shows up to install the machine, and Miss Watson and her staff assume they’ll be fired once it’s up and running. In a memorable scene, he demonstrates the machine to a group of network executives and explains its advantages:

Sumner: “The purpose of this machine, of course, is to free the worker—”

Miss Watson: “You can say that again.”

Sumner: “—to free the worker from the routine and repetitive tasks and liberate his time for more important work.”

Miss Watson and the rest of the research staff are indeed fired, but before they can clean out their desks, EMERAC botches some requests it can’t handle—a call for information on Corfu, for example, returns reams of useless data on the word “curfew,” while a staffer scurries into the stacks and gets the needed answers the old-fashioned way. And then it turns out that the researchers actually should not have received termination notices after all. An EMERAC computer in the payroll department had gone haywire and fired everyone in the company. The error is corrected, the research staffers keep their jobs and learn to work with EMERAC, Miss Watson wisely decides to marry the Spencer Tracy character and not the Gig Young character (part of the mandatory romantic subplot), and once again all is well.

Desk Set is extraordinarily prescient about some future capabilities and uses of computers, and also faithful to the fearful popular sentiment about them. Of course, Miss Watson is exactly the human predecessor of today’s Watson cognitive computing system. (Are the names a coincidence? The film’s opening credits include this intriguing one: “The filmmakers gratefully acknowledge the cooperation and assistance of the International Business Machines Corporation.” IBM’s founder was Thomas J. Watson, namesake of today’s Watson computing system, and his son was CEO at the time of the film.) EMERAC as explained by Sumner in the film is remarkably similar to today’s Watson: All the information in all those books in the research library—encyclopedias, atlases, Shakespeare’s plays—was fed into the machine, which could then respond instantly to natural-language requests (typed, not spoken) for information. Even in 1957 the idea was clear; the technology just wasn’t ready.

The research staffers’ fears about being replaced by a computer were also a sign of things to come. “I hear thousands of people are losing their jobs to these electronic brains,” one of them says. She heard right, and the thousands would become millions. At the same time, the corporate response intended to calm those fears has remained just what Sumner said in the movie—that computers would “free the worker from the routine and repetitive tasks” so he or she could do “more important work.” To this day it’s striking how everyone working on advanced information technology seems to feel defensive about the implicit threat of eliminating jobs and takes pains to say that they’re not trying to replace people. “We’re not intending to replace humans,” said Kirstin Petersen of Harvard’s Wyss Institute for Biologically Inspired Engineering, in explaining the institute’s development of “swarm robotics,” in which large numbers of small, simple robots do construction jobs. “We’re intending to work in situations where humans can’t work or it’s impractical for them to work.” IBM has always said that Watson is intended to supplement human decision making, not replace it—“to make people more intelligent about what they do.”

Most important, and perhaps surprising, is that even the film’s happily-ever-after ending was realistic in the large sense, at least with regard to employment, if not romance. Viewed on the scale of the entire economy, technology’s advance indeed has not cost jobs, despite the widespread fears. Quite the opposite. And those fears are much more deeply rooted than most of us realize.

THE NEW SKEPTICS

The conventional view is that fear of technology arose when technology started upending the economic order in the eighteenth century, at the start of the Industrial Revolution in Britain. But the fears were already well entrenched, and innovators were already sounding remarkably modern in arguing that technology was a boon, not a bane, for workers. In the late sixteenth century, an English clergyman named William Lee invented a machine for knitting stockings—a wonderful advance, he believed, because it would liberate hand knitters from their drudgery. When he demonstrated it to Queen Elizabeth I in 1590 or so and asked for a patent, she reportedly replied, “Thou aimest high, Master Lee. Consider thou what the invention could do to my poor subjects. It would assuredly bring them to ruin by depriving them of employment, thus making them beggars.” After the royal slap down, the queen denied his patent, the hosiers’ guild campaigned against him, and he was forced to move to France, where he died in poverty.

Some 150 years later, in the early dawn of the Industrial Revolution, an Englishman named John Kay revolutionized weaving by inventing the flying shuttle, which doubled productivity—surely a boon for weavers, who could now make twice as much cloth. Yet weavers campaigned against him, manufacturers conspired to violate his patents, and he was forced to move to France, where he died in poverty just like William Lee. Dying destitute in France seemed to be an occupational hazard for innovators.

By the time the Industrial Revolution got going, the pattern was well established. People hated technology that improved productivity. Luddites, smashing power looms in the early nineteenth century, were only the most famous exemplars.

These protesters were right in the short run, but in the long run they were resoundingly wrong. New technology does destroy jobs, but it also creates new ones—jobs for people who operate the stocking frames and power looms, for example. More important, better technology creates better jobs. Workers using improved technology are more productive, so they earn more—and spend more, creating more new jobs across the economy. At the same time, the products those tech-enabled workers make cost less than before; machine-made cloth costs a fraction of what the handmade version costs. The result is that technology, over time and across economies, has raised living standards spectacularly. For centuries, the fears of Luddites past and present have been not merely unfounded but the exact opposite of reality. Advancing technology has improved the material well-being of humanity more than any other development in history, by far.

Now something has changed. The way technology benefits workers is one of the firmest orthodoxies in all of economics, but recently, for the first time, many mainstream economists and technologists have begun to question whether it will continue.

The proximate cause of their new skepticism is the sorry job-generating performance of the developed economies in the wake of the 2008–2009 financial crisis and recession. For decades, the U.S. economy regularly returned to prerecession employment levels about eighteen months after a recession started. Then, starting with the 1990–1991 recession, the lag started lengthening. After the 2008–2009 recession the recovery of employment took seventy-seven months—over six years. How come? And why did wages begin stagnating for large swaths of the U.S. workforce long before the recession began? Why is the same trend happening in other developed countries? As economists look for answers, they see factors that go far beyond the causes of the recession.

“THE DEFINING ECONOMIC FEATURE OF OUR ERA”

Lawrence H. Summers—former U.S. treasury secretary, former president of Harvard University, a star economist—is one of the new skeptics. In a significant lecture to an audience of fellow economists, he summarized in his brisk way the orthodox view of the debate over technology: “There were the stupid Luddite people, who mostly were outside of economics departments, and there were the smart progressive people. . . . The stupid people thought that automation was going to make all the jobs go away and there wasn’t going to be any work to do. And the smart people understood that when more was produced, there would be more income and therefore there would be more demand. It wasn’t possible that all the jobs would go away, so automation was a blessing.”

Evidence overwhelmingly supported that view for decades. All you had to do was imagine the world of 1800 and compare it with the world around you. But then, quite recently, the world changed: “Until a few years ago, I didn’t think this was a very complicated subject,” Summers said. “The Luddites were wrong and the believers in technology and technological progress were right. I’m not so completely certain now.”

Summers is far from the only expert who became doubtful. The Pew Research Center Internet Project in 2014 canvassed 1,896 experts it had identified as insightful on technology issues, and it asked them this question: Will technology displace more jobs than it creates by 2025? Half said yes, and half said no. That was an astounding result. As Summers explained, the evidence in favor of “no” was perfectly clear, or it had been. It’s hard to imagine that, ten years before, as many as 10 percent of such a highly informed group would have said yes. (We don’t know for sure because apparently no one thought the question even worth asking.) Now half said so. The orthodoxy was suddenly no longer orthodox.

What Summers and other economists believe has changed is, in concept, simple. The two factors of production are capital and labor, and in economists’ terms they have always been regarded as complements, not just substitutes. Capital makes workers more productive. Even if it displaces some workers (substitutes for them), it also creates new, more productive jobs using that new capital so that, as Summers said, “if there’s more capital, the wage has to rise” (it complements workers). But now he and others began seeing a new possibility: Capital can substitute for labor, period. Summers explained, “That is, you can take some of the stock of machines and, by designing them appropriately, you can have them do exactly what labor did before.”

The key word is “exactly.” A Google self-driving car doesn’t complement anybody’s work because nobody operates it at all. The company produced a version that doesn’t have a steering wheel, brake pedal, or accelerator, and it’s designed to transport even blind or other disabled people. So it doesn’t make drivers, even a shrunken population of them, more productive. It does exactly what they do and thus just replaces them.

In a world like that, economic logic dictates that wage rates must fall, and the share of total income going to capital rather than labor must rise, which is indeed what has been happening. An important reason, Summers says, is “the nature of the technical changes that we have seen: Increasingly they take the form of capital that effectively substitutes for labor.”

The outlook is obviously for much more capital-labor substitution as computing power gallops unflaggingly forward. That is not a happy future for many people. In fact, as Summers reasons, “It may well be that, given the possibilities for substitution, some categories of labor will not be able to earn a subsistence income.”

Economists aren’t the only experts who see such a trend. “Unlike previous disruptions, such as when farming machinery displaced farm workers but created factory jobs making the machines, robotics and AI [artificial intelligence] are different,” Mark Nall, a NASA program manager with much real-world technology experience, told the Pew canvassers. “Due to their versatility and growing capabilities, not just a few economic sectors will be affected, but whole swaths will be. . . . The social consequence is that good-paying jobs will be increasingly scarce.” Stowe Boyd, lead researcher at Gigaom Research, a technology research firm, was even more pessimistic: “An increasing proportion of the world’s population will be outside the world of work—either living on the dole, or benefiting from the dramatically decreased costs of goods to eke out a subsistence lifestyle.” Michael Roberts, a much respected Internet pioneer, predicted confidently that “electronic human avatars with substantial work capability are years, not decades away. . . . There is great pain down the road for everyone as new realities are addressed. The only question is how soon.”

Microsoft founder Bill Gates has observed the trend also and believes it’s greatly underappreciated: “Software substitution, whether it’s for drivers or waiters or nurses—it’s progressing,” he told a Washington, D.C., audience in 2014. “Technology over time will reduce demand for jobs. . . . Twenty years from now, labor demand for lots of skill sets will be substantially lower. I don’t think people have that in their mental model.”

But isn’t all this garment rending and teeth gnashing just the usual worry over the endless cycle of creative destruction, as new industries displace old ones? You can’t earn a subsistence income with the skills of making slide rules, and that’s not a problem because you can earn a better income doing something else. But the analogy isn’t valid. You can’t earn a living by making slide rules because nobody wants them anymore. This new argument, by contrast, holds that the economy can increasingly provide exactly the goods and services that people most want today and tomorrow, and can do it using more machines and ever fewer people.

Thus Summers’s conclusion, which is significant coming from an economist of his stature: “This set of developments is going to be the defining economic feature of our era.”

THE FOURTH GREAT TURNING POINT FOR WORKERS

The immediate question for most of us is obvious: Who, specifically, gets hurt, and who doesn’t?

To find the answer, it helps to think of these developments as the latest chapter in a story. Technology has been changing the nature of work and the value of particular skills for well over 200 years, and the story so far comprises just three major turning points.

At first, the rise of industrial technology devalued the skills of artisans, who handcrafted their products from beginning to end: A gun maker carved the stock, cast the barrel, engraved the lock, filed the trigger, and painstakingly fitted the pieces together. But in Eli Whitney’s Connecticut gun factory, separate workers did each of those jobs, or just portions of them, using water-powered machinery, and components of each type were identical. Skilled artisans were out of luck, but less skilled workers were in demand. They could easily learn to use the new machines—the workers and machines were complements—and so the workers could earn far more than before.

The second turning point arrived in the early twentieth century, when a new trend emerged. Widely available electricity enabled the building of far more sophisticated factories, requiring better educated, more highly skilled workers to operate the more complicated machines; companies also grew much larger, requiring a larger corps of educated managers. Now the unskilled were out of luck, and educated workers were in demand—but that was okay, because the unskilled could get educated. The trend intensified through most of the twentieth century. Advancing technology continually required better educated workers, and Americans responded by educating themselves with unprecedented ambition. The high school graduation rate rocketed from 4 percent in 1890 to 77 percent in 1970, a national intellectual upgrade such as the world had never seen. As long as workers could keep up with the increasing demands of technology, the two remained complements. The result was an economic miracle of fast-rising living standards.

But then the third major turning point arrived, starting in the 1980s. Information technology had developed to a point where it could take over many medium-skilled jobs—bookkeeping, back-office jobs, repetitive factory work. The number of jobs in those categories diminished, and wages stagnated for the shrinking group of workers who still did them. Yet the trend was limited. At both ends of the skill spectrum, people in high-skill jobs and low-skill service jobs did much better. The number of jobs in those categories increased, and pay went up. Economists called it the polarization of the labor market, and they observed it in the United States and many other developed countries. At the top end of the market, infotech still wasn’t good enough to take over the problem-solving, judging, and coordinating tasks of high-skill workers like managers, lawyers, consultants, and financiers; in fact, it made those workers more productive by giving them more information at lower cost. At the bottom end, infotech didn’t threaten low-skill service workers because computers were terrible at skills of physical dexterity; a computer could defeat a grand master chess champion but couldn’t pick up a pencil from a tabletop. Home health aides, gardeners, cooks, and others could breathe easy.

That was the story into the 2000s. In the nonstop valuing and devaluing of skills through economic history, infotech was crushing medium-skill workers, but workers at the two ends of the skill spectrum were safe or prospering. Now we are at a fourth turning point. Infotech is advancing steadily into both ends of the spectrum, threatening workers who thought they didn’t have to worry.

MAYBE EVEN LAWYERS CAN’T OUTSMART COMPUTERS

At the top end, what’s happening to lawyers is a model for any occupation involving analysis, subtle interpretation, strategizing, and persuasion. The computer incursion into the legal discovery process is well known. In cases around the world computers are reading millions of documents and sorting them for relevance without ever getting tired or distracted. The cost savings are extraordinary. One e-discovery vendor, Symantec’s Clearwell, claimed it could cut costs up to 98 percent. That may seem outlandish, but it’s in line with the claims of an executive at another vendor, Autonomy, who told the New York Times that e-discovery would enable one lawyer to do the work of 500 or more. In addition, software does the job much, much better than people. It can detect patterns in thousands or millions of documents that no human could spot—unusual editing of a document, for example, or spikes in communication between certain people, or even changes in e-mail style that may signal hidden motives.

But that’s just the beginning. Computers then started moving up the ladder of value, becoming highly skilled at searching the legal literature for appropriate precedents in a given case, and doing it far more widely and thoroughly than people can do. Humans still have to identify the legal issues involved, but, as Northwestern University law professor John O. McGinnis has written, “search engines will eventually do this by themselves, and then go on to suggest the case law that is likely to prove relevant to the matter.”

Advancing even higher into the realm of lawyerly skill, computers can already predict Supreme Court decisions better than legal experts can. As such analytical power expands in scope, computers will move nearer the heart of what lawyers do by advising better than lawyers can on whether to sue or settle or go to trial before any court and in any type of case. Companies such as Lex Machina and Huron Legal already offer such analytic services, which are improving by the day. These firms’ computers have read all the documents in hundreds of thousands of cases and can tell you, for example, which companies are more likely to settle than to litigate a patent case, or how particular judges tend to rule in particular types of cases, or which lawyers have the best records in front of specified judges. As more potential litigants, both plaintiffs and defendants, can see better analysis of vastly more data, odds are strong they’ll be able to resolve disputes far more efficiently. One possible result: fewer lawsuits.

None of this means that lawyers will disappear, but it suggests that the world will need fewer of them. It’s already happening. “The rise of machine intelligence is probably partly to blame for the current crisis of law schools”—shrinking enrollments, falling tuitions—“and will certainly worsen that crisis,” McGinnis has observed.

With infotech thoroughly disrupting even a field so advanced that it requires three years of post-graduate education and can pay extremely well, other high-skill workers—analysts, managers—can’t help but wonder about their own futures. What’s happening in law is the application of Watson-like technology to a specific industry, but it can be applied far more widely. The breakthrough of this technology is that it understands natural language, so when you ask it a question, it doesn’t just search for keywords from the question you asked. It tries to figure out the context of your question and thus understand what you really mean. So, for example, if your question includes the phrase “two plus two,” that might mean “four,” or, if you’re in the car business, it might mean “a car with two front seats and two back seats,” or if you’re a psychologist, it might mean “a family with two parents and two children.” Cognitive computing systems derive the context and then come up with possible answers to your question and estimate which one is most likely correct. A system’s answers aren’t especially good when it first delves into a field, but with experience it keeps getting better. That’s why the Internet entrepreneur Terry Jones, who founded Travelocity, has said that “Watson is the only computer that’s worth more used than new.”

Watson-like technology works best when it has a really big body of written material to read and work with. For Jeopardy! Watson downloaded not only the entire contents of Wikipedia but also thousands of previous Jeopardy! clues and responses. Law is obviously an excellent field for this technology. Medicine is another. Memorial Sloan Kettering Cancer Center in New York City uses Watson to extract answers from the vast oncology literature, a task which no physician could ever keep up with. Financial advice looks like a fat target for this technology because it involves a vast and growing corpus of research plus huge volumes of data that change every day. Several financial institutions are therefore using Watson, initially as a tool for their financial advisers. But look just a bit down the road: Corporate Insight, a research firm that focuses on the financial services industry, asks, “Once consumers have a personal Watson in their pocket . . . why would an experienced investor need a financial adviser?”

WRITERS WHO NEVER GET BLOCKED, TIRED, OR DRUNK

Combine an understanding of natural language with high-torque analytic power and you get a nonfiction writer, or at least a species of one. A company called Narrative Science makes software that writes articles that would not strike most people as computer-written. It focused first on events embodying lots of data: ball games and corporate earnings announcements. The software became increasingly sophisticated at going beyond the facts and figures—for example, figuring out the most important play in a game or identifying the best angle for the article: a come-from-behind win, say, or a hero player. Then the developers taught the software different writing styles, which customers could choose from a menu. Next, it learned to understand more than just numerical data, reading relevant material to create context for the article. A number of media companies, including Yahoo and Forbes, publish articles from Narrative Science, though some of the company’s customers don’t want to be identified and don’t tell readers which articles are computer-written. In mid-2014, the Associated Press assigned computers to write all its articles about corporate earnings announcements.

Then Narrative Science realized that maybe the real money wasn’t in producing journalism at all (they could have asked any journalist about that) but in generating the writing that companies use internally, the countless reports and analyses that influence business decisions. So it arranged its technology to gather broad classes of data, including unstructured data like social media posts, on any given topic or problem, and to analyze it deeply, looking for trends, correlations, unusual events, and more. The software uses that data to “make judgments and draw conclusions,” the company says; it can also make recommendations. The software writes it all up at a reading level and in a tone that the customer chooses, also supplying helpful charts and graphs.

This is starting to sound less like writing and more like management.

But are the writing and the analysis any good? That, at least, is for humans to decide. Except that increasingly it need not be. Schools from the elementary level through college are using software to judge writing and analysis in the form of student essays. The software isn’t perfect—it doesn’t yet evaluate such subtleties as voice and tone—but human graders aren’t perfect either. Jeff Pence, a middle-school teacher in Canton, Georgia, who used the software to help grade papers from his 140 students, acknowledged that it doesn’t grade with perfect accuracy, but, he told Education Week, “When I reach that 67th essay, I’m not real accurate.” Similar software is being used at much higher levels. EdX, the enterprise started by Harvard and MIT to offer online courses, has begun using it to grade student papers. The Hewlett Foundation offered two $100,000 prizes for developing such software, and edX hired one of the winners to work on its version, which is available to developers everywhere as open-source code so it can be improved.

Of course, this evaluation software must itself be evaluated by humans, measuring it against the performance of humans. So researchers had a group of human teachers grade a large set of essays. Then they gave those same essays to a separate group of human graders and to the software. They compared the grades assigned by Human Group Two to those assigned by Human Group One, and they also compared the grades assigned by the software to those assigned by Human Group One. All three sets of grades were different, but the software’s grades were no more different from Human Group One’s grades than Human Group Two’s grades were different from Human Group One’s. So while software doesn’t assign the same grades as people, neither do people assign the same grades as other people. And if you look at a large group of grades assigned to the same work by people and by software, you can’t tell which is which.
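
As a minimal sketch of that comparison (with invented scores rather than the researchers' data; only the method mirrors the description above), the check might look like this in Python:

# Hypothetical sketch of the grader-agreement comparison described above.
# The scores are made up for illustration; only the method follows the text.

def mean_abs_difference(grades_a, grades_b):
    # Average absolute gap between two sets of grades for the same essays.
    return sum(abs(a - b) for a, b in zip(grades_a, grades_b)) / len(grades_a)

human_group_one = [4, 3, 5, 2, 4]   # reference human graders
human_group_two = [4, 4, 5, 3, 3]   # a second set of human graders
software_grades = [5, 3, 4, 2, 4]   # the essay-scoring software

print("humans vs. humans:  ", mean_abs_difference(human_group_two, human_group_one))
print("software vs. humans:", mean_abs_difference(software_grades, human_group_one))

If the two gaps come out about the same size, the software disagrees with the reference graders no more than other people do, which is the result just described.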

Two points to draw from this:

One, the software is getting rapidly better. The people are not.

Two, education as currently conceived is becoming really weird. After all, the report-writing software developed by Narrative Science and other companies is easily adapted to other markets, such as students’ papers. So we now have essay-grading software and essay-writing software, both of which are improving. What happens next is obvious. The writing software gets optimized to please the grading software. Every essay gets an A, and neither the student nor the teacher has anything to do with it. But not much education is necessarily going on, which poses problems for both student and teacher.

A ROBOT’S TOUCH

The rapid progress of infotech in taking over tasks at the high-skill end of the job spectrum—lawyers, doctors, managers, professors—is startling, but it isn’t especially surprising. If we thought such jobs were by their nature immune to computer competition, we shouldn’t have, because these jobs are highly cognitive. Much of the work is brain work, and that’s just what computers do best; they needed only time to accumulate the required computing power. The greater surprise shows up at the opposite end of the job spectrum, in the low-skill, low-pay world where the work is less cognitive and more physical. This is the kind of work that computers for decades could hardly do at all. An example illustrates the gap in abilities: In 1997 a computer could beat the world’s greatest chess player yet could not physically move the pieces on the board. But again the technology needed only time, a few more doublings of power. The skills of physical work are also not immune to the advance of infotech.

Google’s autonomous cars are an obvious and significant example—significant because the number one job among American men is truck driver. Many more examples are appearing. You can train a Baxter robot (from Rethink Robotics) to do all kinds of things—pack or unpack boxes, take items to or from a conveyor belt, fold a T-shirt, carry things around, count them, inspect them—just by moving its arms and hands (“end-effectors”) in the desired way. Many previous industrial robots had to be surrounded by safety cages because they could do just one thing in one way, over and over, and that’s all they knew; if you got between a welding robot and the piece it was welding, you were in deep trouble. But Baxter doesn’t hurt anyone as it hums about the shop floor; it adapts its movements to its environment by sensing everything around it, including people.

Many similar kinds of robots operate in different environments—for example, buzzing through hospital hallways delivering medicines, hauling laundry, or picking up infectious waste. Security robots can hang out around public buildings, watching, listening, reading license plates, and sending information to law enforcement as the robot deems appropriate. Robots went into the wreckage of Japan’s ruined Fukushima Daiichi nuclear power plant long before people did.

The advantage robots hold in doing dangerous work is a big reason the U.S. military is a major user of them and a major funder of research into them. By 2008 about 12,000 combat robots were working in Iraq. Some, barely larger than a shoebox, run on miniature tank treads and can carry a camera and other sensors; they gather intelligence and do surveillance and reconnaissance. Larger ones dispose of bombs or carry heavy loads into and out of dangerous places. A few robots armed with guns were sent to Iraq but reportedly were never used. Nonetheless, General Robert Cone announced in 2014 that the army was considering shrinking the standard brigade combat team from 4,000 soldiers to 3,000, making up the difference with robots and drones.

So far virtually none of those robots are autonomous; a person controls each one. But the army realized this model was inefficient, so the U.S. Army Research Laboratory developed a more sophisticated robot called RoboLeader that, in the words of project chief Jessie Chen, “interprets current situations in terms of an operator’s objective”—it looks, listens, senses, and determines how best to carry out its orders—“and issues detailed command signals to a team of lower capability robots.” The great advantage, as Chen explains, is that “instead of directly managing each individual robot, the human operator only deals with a single entity—RoboLeader.”

Ladies and gentlemen, we have invented robotic middle management.

Robot physical skills are fast advancing on other dimensions as well. Consider a robotic hand developed by a team from Harvard, Yale, and iRobot, maker of the Roomba vacuum cleaner and many other mobile robots, including many used by the military. So fine are the robotic hand’s motor skills that it can pick up a credit card from a tabletop, put a drill bit in a drill, and turn a key, all of which were previously beyond robotic abilities. “A disabled person could say to a robot with hands, ‘Go to the kitchen and put my dinner in the microwave,’” one of the researchers, Harvard professor Robert Howe, told Harvard magazine. “Robotic hands are the real frontier, and that’s where we’ve been pushing.”

It seems that everywhere we look, computers are suddenly capable of doing things that they couldn’t do and that some people thought they never would do. The less exalted skills, physical ones like folding a T-shirt, turned out to be the more challenging, but at last even they are succumbing to the combination of relentlessly increasing computing power and algorithmic skill. The number of people who wrongly believed they could never be replaced by a computer keeps growing—not slower, but faster.

THE COMPUTER KNOWS YOU’RE LYING

And yet, isn’t there one last redoubt of human uniqueness, some ultimate zone of pulsing, organic personhood into which computers can never enter? Everything we’ve examined so far has involved abilities that originate in the left brain—logical, linear, flowchartable, computer-like. But what about the other side, the right side, and its specialty—emotion? It’s irrational, mysterious, and we all understand it, even though we can’t explain how. In addition, emotion is often the real secret sauce of success in many jobs, high-skill and low. Executives must read and respond to the emotions of customers, employees, regulators, and everyone else they deal with. A good waiter responds differently to customers who are cranky, tired, cheerful, confused, or tipsy, all without quite knowing how. Surely this is forever ours alone.

The founders of companies like Emotient and Affectiva might disagree, however. They’re researchers in the field of affective computing, in which computers understand human emotion. As their work advances, our expert ability to navigate the flesh-and-blood, analog world of human feelings is looking a lot less special every day.

Most helpful customer reviews

An Outstanding Guide for Future Proofing Yourself in Today's World
By Chuck Bolton
In today’s globally complex, uncertain, disruptive economy, where automation and lower labor costs displace the jobs of tens of thousands, those who are wise and work for a living have been asking the question, “How will I add value in the future?”

In Humans are Underrated, author Geoff Colvin suggests a shift from that question to a better one that may just be the recipe to future proof ourselves: “What are the activities that we humans, driven by our deepest nature or by the realities of daily life, will simply insist be performed by other humans, even if computers could do them?”

Here’s the reality: humans must work together to set and achieve collective goals. In the corporate world, there are too many constituencies, too much information, too many nuances and subtleties that must be accounted for. Teams and groups solve problems better than individuals working in isolation. Those who thrive will develop and demonstrate emotional intelligence and, within their groups, collective intelligence.

Our need to work together, to work collaboratively, is baked deep in our DNA. For tens of thousands of years, we’ve told stories, learned from one another and worked together to ensure our survival. In the Information Age, many of us have lost the ability to work effectively with others. Those who thrive in the future will show empathy and master the abilities of working in and leading groups and teams.

This well-researched book is informative and fascinating, full of examples, anecdotes, and stories. It provides ample proof that we're moving from the Information Age to a new Relationship Age, where the ability to engage co-workers and customers with humor, energy, and generosity will prevail. It's not that technical skills are unimportant, but those who are truly valuable are the ones who can build relationships, collaborate with others, brainstorm, and lead. The good news: we have an invitation to operate with greater humanity, to be the people we were meant to be. That is not always easy, but it is an invitation worth accepting if you wish to stand out at work in today's tumultuous environment. Humans Are Underrated is an outstanding guide to staying relevant and thriving in the future. Two thumbs up!

29 of 30 people found the following review helpful.
Send in the Robots
By tom abeles
The first quarter of this volume is Colvin's sense of where artificial intelligence and robotics will evolve. Like other writers in this arena, he reiterates the belief that if one can imagine AI and robotics doing a job in the near future, then it can and probably will be accomplished. Colvin then takes the last three-quarters of the book to describe what we currently believe makes us human and the services we provide to one another that can never be replaced by AI and robotics. It reads like the discourse of a person who doesn't quite believe his own rhetoric. The narrative creates a strong undercurrent of "maybe" and a sense that if the dialog stops, the mystique melts away and the robots and AI may prevail. When the author asks one of the most prescient thinkers on technology, Nicholas Negroponte, founder of MIT's Media Lab, what people will do better than computers in ten years, Negroponte answers, "Very little, other than enjoy." That ghost haunts the book, much as Peter Pan, in the eponymous movie, asks the audience to repeat "I do believe in fairies" to save Tinker Bell.

Colvin's thesis circles the argument that humans have emotional connections that strengthen relationships in problem solving, inform people's decisions about one another in business, legal, and personal matters, and fuel creativity. He makes the case with selective examples, choosing not to deal with humans bonding with robotic pets, engaging with computer personalities like the early psychotherapy program ELIZA, or their connection with Apple's Siri. The examples from the military have amply supported the use of AI. The rise of Watson, and of deep-learning systems such as the drug-discovery programs discussed by Jeremy Howard in his TED Talk, points clearly to the conclusion that R2-D2 and C-3PO from Star Wars aren't as far off in the mythic, misty future as Colvin would like to believe.

Perhaps one of the most important sections, buried in the first quarter, is Colvin's discussion of the work of Goldin and Katz at Harvard, The Race Between Education and Technology. This work, supported by other research, points to the fact that the United States' technology dominance was due to its early and deep commitment to educating the public, particularly in support of technology, for many reasons including military use. What the studies show is that the dominating contribution of education in technology to human welfare reached a tipping point around the year 2000. Other recent data notes that while industry still needs the technologically savvy, and many of these areas will still command high salaries, the largest demand will be for the more socially savvy. Colvin has taken that core idea as his underlying premise in validating the ability of humans to remain at the head of the animal kingdom, whether challenged by AI and robotics made from biologics or composed of silicon, rare metals, and plastics. On the other hand, if one believes the writings of Rudy Rucker in his Ware Tetralogy or Ramez Naam's Nexus series, there is nothing that prevents the emergence of human-like "fleshie" intelligent bots or humans born with embedded technology.

The author has selectively culled the literature and made it accessible to a lay public. The first quarter is insightful, but the last three-quarters seems to become technologically myopic in an attempt to validate an underlying thesis of which he himself still seems unconvinced.

10 of 11 people found the following review helpful.
So There is Hope for us Mere Mortals
By Chip Hauss
I only read this book because it was reviewed by Tyler Cowen in the Washington Post, and I like but usually disagree with what he has to say about the economy.

But this time I had to agree with Cowen, perhaps because Geoff Colvin is talking about two longer-term trends that resonate in my work as a peacebuilder rather than as someone who focuses on either short- or long-term economic dynamics.

Colvin is not the first to draw our attention to the fact that new technologies are making many traditional professional skills less and less valuable. Dozens of jobs--including some things done by highly trained doctors, lawyers, or (like me in an earlier life) college professors--could well be done better by machines that either exist today or will exist very, very soon.

But his real contribution is to point out that our creative, collaborative, and related skills will become even more important as we face more daunting problems whose causes and consequences are inextricably intertwined. As with the "down" side of the story, Colvin provides us with plenty of thought-provoking examples, ranging from the Defense Department's initiatives on narratives and neuroscience to the ways women and what Adam Grant calls "givers" lead.

I expected to read this book quickly. Instead, it took me days because I had to keep thinking about his examples and tracking down his references. I read a lot, and this has to be among the best two or three books I've read this year.


Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, by Geoff Colvin PDF
Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, by Geoff Colvin EPub
Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, by Geoff Colvin Doc
Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, by Geoff Colvin iBooks
Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, by Geoff Colvin rtf
Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, by Geoff Colvin Mobipocket
Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will, by Geoff Colvin Kindle
