Well... the answer may well be: a lot!
This fact emerged from experiments carried out by sociologists Duncan J. Watts, Matthew Salganik and Peter Dodds, which Watts described in his 2011 book, “Everything Is Obvious* (*Once You Know the Answer)”. Their work focuses on online markets. This post of mine is based on Robert H. Frank’s article on this subject that appeared in The New York Times of August 5, 2012.
Frank describes the researchers’ experiment thus: They invited subjects to a temporary website called Music Lab. The site listed 48 recordings by little-known indie bands. The subjects could download any of the songs free if they agreed to give a quality rating after listening. The average of these ratings for each song then became a proxy for the song’s quality in the rest of the experiment. Importantly, since each subject saw no information other than the names of the bands and the songs, their ratings were completely independent of the reactions of other subjects.
What does this have to do with marketing your book on, say, Amazon?
Think of the song ratings in this experiment as the ratings that readers give to your book when it is newly added to Amazon.
The result of the first phase of the experiment was that the independent ratings were extremely variable. Some songs got mostly high marks or mostly low marks, but a substantially larger number received distinctly mixed reviews.
The second phase of the experiment differed from the first in that two new pieces of information were added to each band and song title: 1) how many times each song had been downloaded by others, and 2) the average rating it had received so far. Eight separate sessions followed this protocol.
This phase mirrors what happens when readers decide whether to buy a book on Amazon based on early reviewer ratings.
The result of this second phase was that this social feedback caused sharply higher inequality in song ratings and download frequencies. The most popular songs were a lot more popular, and the least popular songs a lot less popular, than the same songs rated in Phase 1. Also, each of the eight sessions, which involved discrete subsets of subjects, displayed enormous variability in popularity rankings. An example cited in Frank’s article highlighted the results for a song called “Lockdown” by the band 52 Metro. “Lockdown” was ranked 26th out of 48 songs by the subjects who rated songs in the first phase without any information other than the name of the song and the band. But in the second phase, which added the ratings given by other subjects in the group, “Lockdown” achieved a ranking of #1 in one of the eight groups, and #40 in another!
This is the crux of the issue: when a potential buyer considers the ratings given by earlier reviewers and sees nothing but threes or less, for example, what are the chances that he or she will buy?
So, if “Lockdown” (in the middle of the objective ranks of quality produced in Phase 1) happened to experience the dynamics of the group that ranked it #1 with the social feedback information added, it would have been miraculously transformed into the best song of the lot. If it experienced the dynamics of the group that ranked it #40, it would be the worst. Presumably, sales are much affected by perceived quality. What dynamics, you may ask? The dynamics of the sequencing of the ratings; that is, whether the early ratings reflect favorably, unfavorably, or neither. If the sequence of ratings is completely random (a big if), luck is everything.
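To see how this sequencing effect can work, here is a toy simulation, not the researchers’ actual model, of the “rich get richer” dynamic the experiment revealed. It assumes two songs (or books) of identical quality, where each new arrival simply favors whichever one already has more downloads. Everything in it (the function name, the starting counts, the number of sessions) is illustrative.

```python
import random

def simulate_session(n_steps=1000, seed=None):
    """Toy cumulative-advantage model: each new listener picks between
    two equal-quality songs with probability proportional to each song's
    current download count, so small early random differences compound."""
    rng = random.Random(seed)
    downloads = [1, 1]  # both songs start with one download each
    for _ in range(n_steps):
        share_a = downloads[0] / sum(downloads)
        if rng.random() < share_a:
            downloads[0] += 1  # song A's lead (if any) attracts this listener
        else:
            downloads[1] += 1
    return downloads

# Eight independent "sessions", like the experiment's eight groups:
# same two songs, yet song A's final share of downloads varies widely,
# driven purely by which song got lucky early.
totals = 2 + 1000
shares = [simulate_session(seed=s)[0] / totals for s in range(8)]
print([round(s, 2) for s in shares])
```

Because quality is identical by construction, any spread in the final shares across sessions is pure luck in the early sequence of choices, which is the post’s central point.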
So, if early reviewers are ecstatic about your book, you have a much better chance of making the sale when subsequent potential buyers consider it on the strength of these high ratings. Nothing new about this.
As Frank points out, “the most striking finding was that if a few early listeners disliked the song, that usually spelled its doom. But if a few early listeners happened to like the same song, it often went on to succeed. In their experiments, the sociologist researchers showed how feedback could be a vitally important random effect. Any random differences in the early feedback we receive tend to be amplified as we share our reactions with others. Early success, even if unearned, breeds further success, and early failure breeds further failure. The upshot is that the fate of products in general, but especially of those in the intermediate-quality range, often entails an enormous element of luck. Chance elements in the information flows that promote success are sometimes the most important random factors of all.”
The lesson here would seem to be that independent of how good your book is, its fate in the Amazon marketplace is determined largely by early ratings that just might be extraordinarily bad for no other reason than that, of all the ratings your book will receive, the early bad ones begot more bad ones. Of course, the opposite could happen too: good early ratings that inflate the perceived quality of your book.