
Digital list price: R$ 84,61
Kindle price: R$ 45,53

You save: R$ 38,77 (46%)


Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Kindle Edition)


See all 3 formats and editions

Kindle Edition: R$ 45,53

Print length: 274 pages | Word Wise: Enabled | Enhanced Typesetting: Enabled | Page Flip: Enabled | Language: English


Product Description

Longlisted for the National Book Award
New York Times Bestseller

A former Wall Street quant sounds an alarm on the mathematical models that pervade modern life — and threaten to rip apart our social fabric


We live in the age of the algorithm. Increasingly, the decisions that affect our lives—where we go to school, whether we get a car loan, how much we pay for health insurance—are being made not by humans, but by mathematical models. In theory, this should lead to greater fairness: Everyone is judged according to the same rules, and bias is eliminated.

But as Cathy O’Neil reveals in this urgent and necessary book, the opposite is true. The models being used today are opaque, unregulated, and uncontestable, even when they’re wrong. Most troubling, they reinforce discrimination: If a poor student can’t get a loan because a lending model deems him too risky (by virtue of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a “toxic cocktail for democracy.” Welcome to the dark side of Big Data.

Tracing the arc of a person’s life, O’Neil exposes the black box models that shape our future, both as individuals and as a society. These “weapons of math destruction” score teachers and students, sort résumés, grant (or deny) loans, evaluate workers, target voters, set parole, and monitor our health.

O’Neil calls on modelers to take more responsibility for their algorithms and on policy makers to regulate their use. But in the end, it’s up to us to become more savvy about the models that govern our lives. This important book empowers us to ask the tough questions, uncover the truth, and demand change.

— Longlist for National Book Award (Non-Fiction)
— Goodreads, semi-finalist for the 2016 Goodreads Choice Awards (Science and Technology)
— Kirkus, Best Books of 2016
— New York Times, 100 Notable Books of 2016 (Non-Fiction)
— The Guardian, Best Books of 2016
— WBUR's "On Point," Best Books of 2016: Staff Picks
— Boston Globe, Best Books of 2016, Non-Fiction

Back Cover

'A manual for the 21st-century citizen ... accessible, refreshingly critical, relevant and urgent' Federica Cocco, Financial Times

'O'Neil's book offers a frightening look at how algorithms are increasingly regulating people ... masterly' Clay Shirky, The New York Times Book Review

'Opens the curtain on algorithms that exploit people and distort the truth while posing as neutral mathematical tools. This book is wise, fierce, and desperately necessary' Jordan Ellenberg, author of How Not to Be Wrong

'A vital crash course in why we must interrogate the systems around us and demand better' Cory Doctorow

'Cathy O'Neil has a story to tell, and it is a story about you ... what she reveals is fascinating' Danny Dorling, Times Higher Education


Product Details

  • Format: Kindle Edition
  • File size: 2888 KB
  • Print length: 274 pages
  • Publisher: Broadway Books; Reprint edition (September 6, 2016)
  • Sold by: Amazon Servicos de Varejo do Brasil Ltda
  • Language: English
  • ASIN: B019B6VCLO
  • Text-to-Speech: Enabled
  • X-Ray:
  • Word Wise: Enabled
  • Screen Reader: Supported
  • Enhanced Typesetting: Enabled
  • Average customer review: Be the first to review this item
  • Amazon Best Sellers Rank: #12,427 in Kindle Store (See the Top 100 Best Sellers in the Kindle Store)

Customer Reviews

There are no customer reviews yet for this title on Amazon.com.br.

Most helpful customer reviews on Amazon.com (beta)

Amazon.com: 4.1 out of 5 stars, 226 reviews
214 of 225 people found this review helpful:
5.0 out of 5 stars: Stop Using Math as a Weapon (September 17, 2016)
By Amazon Customer - Published on Amazon.com
Format: Hardcover
So here you are on Amazon's web page, reading about Cathy O'Neil's new book, Weapons of Math Destruction. Amazon hopes you buy the book (and so do I, it's great!). But Amazon also hopes it can sell you some other books while you're here. That's why, in a prominent place on the page, you see a section entitled:

Customers Who Bought This Item Also Bought

This section is Amazon's way of using what it knows -- which book you're looking at, and sales data collected across all its customers -- to recommend other books that you might be interested in. It's a very simple, and successful, example of a predictive model: data goes in, some computation happens, a prediction comes out. What makes this a good model? Here are a few things:

1. It uses relevant input data. The goal is to get people to buy books, and the input to the model is what books people buy. You can't expect to get much more relevant than that.
2. It's transparent. You know exactly why the site is showing you these particular books, and if the system recommends a book you didn't expect, you have a pretty good idea why. That means you can make an informed decision about whether or not to trust the recommendation.
3. There's a clear measure of success and an embedded feedback mechanism. Amazon wants to sell books. The model succeeds if people click on the books they're shown, and, ultimately, if they buy more books, both of which are easy to measure. If clicks on or sales of related items go down, Amazon will know, and can investigate and adjust the model accordingly.
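The three properties above can be seen in a minimal co-purchase counter. This is only an illustrative sketch with made-up order data; the titles and the `also_bought` helper are invented here and are not Amazon's actual system:

```python
from collections import Counter

# Hypothetical order data: each order is the set of books bought together.
orders = [
    {"Weapons of Math Destruction", "How Not to Be Wrong"},
    {"Weapons of Math Destruction", "The Signal and the Noise"},
    {"Weapons of Math Destruction", "How Not to Be Wrong"},
    {"How Not to Be Wrong", "The Signal and the Noise"},
]

def also_bought(item, orders, top_n=2):
    """Recommend by co-purchase counts.

    Point 1: the input (what people actually bought) is directly relevant.
    Point 2: the counts themselves explain every recommendation.
    """
    counts = Counter()
    for order in orders:
        if item in order:
            counts.update(order - {item})
    return [title for title, _ in counts.most_common(top_n)]

# Point 3: success would be measured by clicks and sales on these suggestions.
print(also_bought("Weapons of Math Destruction", orders))
# ['How Not to Be Wrong', 'The Signal and the Noise']
```

Because the model is nothing but visible counts, anyone can audit why a given title was recommended, which is exactly the transparency the review praises.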

Weapons of Math Destruction reviews, in an accessible, non-technical way, what makes models effective -- or not. The emphasis, as you might guess from the title, is on models with problems. The book highlights many important ideas; here are just a few:

1. Models are more than just math. Take a look at Amazon's model above: while there are calculations (simple ones) embedded, it's people who decide what data to use, how to use it, and how to measure success. Math is not a final arbiter, but a tool to express, in a scalable (i.e., computable) way, the values that people explicitly decide to emphasize. Cathy says that "models are opinions expressed in mathematics" (or computer code). She highlights that when we evaluate teachers based on students' test scores, or assess someone's insurability as a driver based on their credit record, we are expressing opinions: that a successful teacher should boost test scores, or that responsible bill-payers are more likely to be responsible drivers.

2. Replacing what you really care about with what you can easily get your hands on can get you in trouble. In Amazon's recommendation model, we want to predict book sales, and we can use book sales as inputs; that's a good thing. But what if you can't directly measure what you're interested in? In the early 1980s, the magazine US News wanted to report on college quality. Unable to measure quality directly, the magazine built a model based on proxies, primarily outward markers of success, like selectivity and alumni giving. Predictably, college administrators, eager to boost their ratings, focused on these markers rather than on education quality itself. For example, to boost selectivity, they encouraged more students, even unqualified ones, to apply. This is an example of gaming the model.

3. Historical data is stuck in the past. Typically, predictive models use past history to predict future behavior. This can be problematic when part of the intention of the model is to break with the past. To take a very simple example, imagine that Cathy is about to publish a sequel to Weapons of Math Destruction. If Amazon uses only purchase data, the Customers Who Bought This Also Bought list would completely miss the connection between the original and the sequel. This means that if we don't want the future to look just like the past, our models need to use more than just history as inputs. A chapter about predictive models in hiring is largely devoted to this idea. A company may think that its past, subjective hiring system overlooks qualified candidates, but if it replaces the HR department with a model that sifts through resumes based only on the records of past hires, it may just be codifying (pun intended) past practice. A related idea is that, in this case, rather than adding objectivity, the model becomes a shield that hides discrimination. This takes us back to Models are more than just math and also leads to the next point:

4. Transparency matters! If a book you didn't expect shows up on The Customers Who Bought This Also Bought list, it's pretty easy for Amazon to check if it really belongs there. The model is pretty easy to understand and audit, which builds confidence and also decreases the likelihood that it gets used to obfuscate. An example of a very different story is the value added model for teachers, which evaluates teachers through their students' standardized test scores. Among its other drawbacks, this model is especially opaque in practice, both because of its complexity and because many implementations are built by outsiders. Models need to be openly assessed for effectiveness, and when teachers receive bad scores without knowing why, or when a single teacher's score fluctuates dramatically from year to year without explanation, it's hard to have any faith in the process.

5. Models don't just measure reality, but sometimes amplify it, or create their own. Put another way, models of human behavior create feedback loops, often becoming self-fulfilling prophecies. There are many examples of this in the book, especially focusing on how models can amplify economic inequality. To take one example, a company in the center of town might notice that workers with longer commutes tend to turn over more frequently, and adjust its hiring model to focus on job candidates who can afford to live in town. This makes it easier for wealthier candidates to find jobs than poorer ones, and perpetuates a cycle of inequality. There are many other examples: predictive policing, prison sentences based on recidivism, e-scores for credit. Cathy talks about a trade-off between efficiency and fairness, and, as you can again guess from the title, argues for fairness as an explicit value in modeling.
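The feedback-loop point lends itself to a tiny simulation. The numbers and the allocation rule below are invented for illustration (they are not from the book): recorded crime depends on where patrols already are, and next round's patrols follow recorded crime, so a small difference in underlying rates compounds into a lopsided allocation:

```python
# Invented illustration of a model feedback loop: you only record crime
# where you are looking for it, then you look where you recorded crime.
def simulate_patrols(true_crime, patrols, rounds):
    """true_crime: actual per-district crime rates (held constant).
    patrols: share of patrols per district (sums to 1)."""
    for _ in range(rounds):
        # Recorded crime is proportional to patrol presence.
        recorded = [c * p for c, p in zip(true_crime, patrols)]
        total = sum(recorded)
        # Reallocate patrols in proportion to what was recorded.
        patrols = [r / total for r in recorded]
    return [round(p, 3) for p in patrols]

# A 55/45 difference in true crime becomes roughly a 73/27 patrol split:
print(simulate_patrols([0.55, 0.45], [0.5, 0.5], rounds=5))  # [0.732, 0.268]
```

Each round multiplies the patrol ratio by the ratio of true rates, so the allocation drifts ever further from the modest underlying difference — a self-reinforcing prophecy in a dozen lines.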

Weapons of Math Destruction is not a math book, and it is not investigative journalism. It is short -- you can read it in an afternoon -- and it doesn't have time or space for either detailed data analysis (there are no formulas or graphs) or complete histories of the models she considers. Instead, Cathy sketches out the models quickly, perhaps with an individual anecdote or two thrown in, so she can get to the main point -- getting people, especially non-technical people, used to questioning models. As more and more aspects of our lives fall under the purview of automated data analysis, that's a hugely important undertaking.
339 of 381 people found this review helpful:
3.0 out of 5 stars: "They back up their analysis with reams of statistics, which give them the studied air of evenhanded science." (August 14, 2016)
By CodeMaster Talon - Published on Amazon.com
Format: Hardcover | Vine Customer Review of Free Product (What's this?)
I struggled with the star rating for this book. There are certainly aspects of the work that merit five stars. And it is VERY thought-provoking, like a good book should be. But there are flaws, significant ones, with the biggest flaw being a glaring over-simplification of many of the systems that O'Neil decries in the book. I don't know if O'Neil has personally ever had to take a psychology test to get a job, worked under the Kronos scheduling system, lived in a neighborhood with increased police presence due to crime rates, been victimized by insurance rates adjusted to zip codes, and endured corporate wellness programs. But all of those things have been a part of my life for years, and even I have to admit the many positive aspects of some of these systems. A few examples:

--Kronos. Despised by the rank and file of companies that I've worked for, Kronos software automates many things that previously were done by people, mostly managers. I hated it, but I have to admit overall it made things more fair. Why? Well, say you have a workplace policy that mandates chronically-late employees be written up for tardiness and eventually fired if they don't shape up. What tended to happen at multiple companies I worked for was that managers would look the other way when their buddies were tardy, and write up people they didn't like. Kronos changed that, because the system automatically generated write-ups for any employee who clocked in late too many times. Kronos has no buddies. Popular, habitually-late people suffered, but it was more "fair" in the true sense of the word. Some systems, like Kronos, have both aspects that level the playing field and aspects (like increased scheduling "efficiency") that can victimize workers. O'Neil tends to harp on the negative only, and if you have not personally seen both sides of the system, you might not realize there was another side at all.

--Increased police presence in high-crime areas. This one really rubbed me the wrong way. O'Neil positions this as something that victimizes the poor. Well I have been poor, or at least this country's version of it, and I have lived in very high crime areas where, if you didn't shut your window at night, chances were good you would hear a murder. And believe me when I say I was DEEPLY grateful for the increased police presence. But then, I wasn't committing crimes. Now I live in a very wealthy neighborhood (though I am not wealthy) where I have not seen a single police car drive down my street in the past four months. O'Neil argues that many crimes, like drug use by affluent college students, go unpunished because the police are busy in the poorer neighborhoods. I agree, but police resources are limited and for mercy's sake they should be sent where people are being killed, not where a college student is passed out in his living room. My current neighbors may be committing as many crimes as O'Neil implies, but I'm not terrified to walk down the street, so I don't mind the lack of police presence. I know officers have to go deal with the more life-threatening stuff, and I am grateful to them. It all depends on your perspective.

--Corporate Wellness programs. These programs have never done anything for me except shower me with gift cards and educate me on behavioral sleep therapy. I love them. But, again, perspective. I am not overweight, I love to work out, and I eat healthy. The programs were a source of income for me and my family when we needed it most. I just would have liked acknowledgement that wellness programs really do have benefits for some people, instead of a chapter painting them as some sort of 1984-style nightmare where we are all forced to be thin. It's more complicated than that.

--And the best for last: The psychology tests. Those things are pretty bad. Despite winning multiple Employee and Student of the Year awards in my life, I can't pass those tests. Not to save my life. I didn't think much of it, until I heard about another star employee who couldn't pass them either. Then I met a third star employee (and I am talking about an employee who won two JD Power Awards in two years) who couldn't pass them. Why? Picture holding a hundred quarters in your hands and then throwing them at a wall. Some will go off to one side and some to another, but most will probably cluster in the middle. Those tests keep the quarters in the middle, weeding out people who aren't typical. Sometimes that's good (deadbeats), sometimes that's bad (talented employees who think differently). Here O'Neil misses an opportunity to convince owners of companies that the tests can cost them highly desirable employees. Offering real, concrete ideas of how the tests could be improved to benefit both workers and company owners would have been a harder book to write, but a much more useful one.
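The "quarters" metaphor can be made concrete with a toy filter. The scores below are made up purely for illustration; the point is only that a screen which keeps candidates near the middle of the distribution rejects outliers at both ends, including the standouts:

```python
import statistics

# Made-up screening-test scores; 95 is a standout, 12 a likely washout.
scores = [48, 50, 51, 49, 52, 50, 47, 95, 12, 53]

mean = statistics.mean(scores)   # 50.7
sd = statistics.pstdev(scores)   # ~18.7, inflated by the two outliers

# Keep only candidates within one standard deviation of the mean:
passed = [s for s in scores if abs(s - mean) <= sd]

print(passed)  # both the 12 and the 95 are screened out
```

The filter cannot tell a star apart from a deadbeat; it only knows distance from typical, which is exactly the failure mode the reviewer describes.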

A lot of the ominous implications made in the book have to do with what MIGHT happen in the future, if certain systems become more common. O'Neil often uses blanket statements to imply that certain outcomes are inevitable, but that is far from true. Irritate enough people, and the systems change. Legal challenges are made and won. Some companies, eager to lure star workers, throw out some of the more punishing aspects of commonly-used systems (that happened at a company where I worked, where "life-style" scheduling that forbade clopening and gave you two days off in a row was used in conjunction with Kronos. Worked great, people loved it.). The biggest weapon against abuse is, as O'Neil repeatedly states, transparency. Having been in the industry that creates these algorithms, she is in a unique position to expose the finer details of how they work. But the book is short on the kind of details I personally crave and long on blanket statements and generalizations, the same kind of generalizations she denounces companies for making. Not all automated systems victimize the poor, not even the ones spotlighted in this book. I know because I lived them and I was poor.

I hovered on the edge of a four star rating for this book, until a chance conversation with a Japanese woman a couple days ago. Her grandmother had lost most of her possessions and land after World War II because of land redistribution. My friend was not complaining; she thought the reforms were overall a good thing, though her family had lost a lot from them. "Something may benefit 99 people out of 100," she told me, "but there's always that one person..." Exactly. There's always that one person. These systems that have come to permeate our culture need to be tweaked to minimize injustice. Unlawful algorithms need to be outlawed. Bad ideas need to be replaced with good ones. And Cathy O'Neil does discuss this, especially in the conclusion, but for me the focus of the book wasn't on target. It was too slanted against systems I have seen both harm AND help. It over-simplified issues, at least for me. It's a mess out there, and solutions that work for everyone are wickedly hard to come by.

Because there's always that one person.

GRADE: B-
Interesting side-note: In Greek Mythology, "Kronos" is the name of the Titan who devoured his own children. My co-workers always found that hilarious.
6 of 6 people found this review helpful:
5.0 out of 5 stars: Marketing algorithms exposed! (October 7, 2016)
By Kismet - Published on Amazon.com
Format: Kindle Edition | Verified Purchase
Really, really excellent book on the use of Big Data in today's connected world. Author investigates policing, incarceration, schools, marketing, and the political process from the perspective of a data scientist who has seen how algorithms are used with unintended consequences. I especially like her push for the use of big data for justice and benefit to society instead of all the nefarious purposes it is put to now. Easy read.
3 of 3 people found this review helpful:
5.0 out of 5 stars: A must read for all citizens of the digital era (January 22, 2017)
By Quinton Zondervan - Published on Amazon.com
Format: Kindle Edition | Verified Purchase
In this excellent book the author clearly explains in layperson's terms how commercial and government data models are affecting our lives and in many cases ruining some lives. For example, she describes a computer algorithm that decides the fate of prisoners up for parole. We think it will be less biased than human decision makers, but in fact the bias can be encoded in the algorithm, and because its details are hidden, and because it drives positive feedback loops, it can create very unfair outcomes (e.g. if it's racially biased against blacks, more and more black people get snared in its trap, seemingly validating the bias). Every technology has potential downsides and upsides, and big data models are no exception. The first step is to understand what's going on, and this book is a great place to start. She also gives examples of how these models can be and are being used for good, and of some potential ways the bad models can be brought under control. No math or statistical knowledge is required to understand the book.
3 of 3 people found this review helpful:
5.0 out of 5 stars: Important lessons for people developing business software (November 17, 2016)
By PAL3 - Published on Amazon.com
Format: Kindle Edition | Verified Purchase
This book is an important read for anyone in the business world. As a programmer with a moderate understanding of statistics, I've seen companies put together statistically based "optimizing" algorithms with little or no thought given to the moral aspects of those algorithms. As Dr. O'Neil points out in this book, most of these algorithms are too small in scope to do serious damage. But one never really knows when some small project might take off and become a true "weapon of math destruction". I look at this book as a collection of examples of how such algorithms can go astray. Software developers would create better products if they learned from these examples and incorporated the lessons into each new project they tackle.