(Updated and edited, 8-9-15)
Or I tried to, as it wouldn’t post. Instead, something else happened (bizarrely, but perhaps appropriately for a study that gives the Dolphins more than a 700% greater chance of making the playoffs than the Ravens).
Remember, it’s just a comment, and unedited. (Also, note the study predicted a Seahawks-Dolphins Super Bowl, which is how I made the “second highest playoff chance” error. The Packers were given the second highest playoff chances. The Dolphins were third, but highest in the AFC.) Here it is:
Of course he’s not predicting who will make the playoffs. That’s what an “X percent chance” means, by definition. But all of the issues raised apply equally to the percentage chances concluded, if not more so.
The fact that “data” was used does not mean that the inherent assumptions that go into choosing which data to use and how to weight it, nor the decisional and necessary omissions, yield a good or even reasonable result.
And these results are not good.
Of course there is no way to “prove” that. Nor does the Titans, for example, failing to make the playoffs (as they probably will), or the Seahawks making the playoffs (as they probably will), mean that the probabilities of 2% and 95%, respectively, represent reasonable estimations (so far as such estimations exist, in terms of something no one can know). (And vice versa.)
There are numerous glaring flaws in the conclusions, regardless of how they were arrived at and regardless of the fact that data was used, so picking out any one is almost pointless. But one raised above is a good one: the Falcons are an unknown with a new coach and some mid-level changes. Yes, they’re in a bad division, but the idea that the Ravens are just under 1 in 10 to make the playoffs and the Falcons 50-50 borders on the ridiculous. (And for the same reasons given above, it would remain so even if Atlanta DOES make the playoffs and Baltimore does not.)
I kind of like the dark horse Dolphins pick, but giving them the second highest chance of making the playoffs is also ridiculous.
Interesting study though, and fun to consider. It will also be interesting to look back upon as the season unfolds.
But it was tagged as spam. And, from a filter that may have made a mistake, instead of an apology or something neutral in case it had (not only was the comment obviously not spam, I hadn’t even commented on the site before, or if I had it was minimal, and quite some time ago), appeared this:
“ERROR: Your comment appears to be spam. We don’t really appreciate spam here.”
Since apparently a minor insult just isn’t enough, and regardless of the fact that filters can not only catch things that aren’t spam but also wind up wasting the commenter’s time as a result, the above reply was followed by:
“Go back and post something useful.”
This treats the comment and auto response not as an assumption but as a conclusion, with nothing but a robotic, error-riddled program driving it (somewhat like the subject “prediction” study of the article itself, ironically enough): that the attempted comment is, not may be, spam. And, for good measure, it adds a double, if mild, veiled insult: “We really don’t appreciate spam. Now go post something useful.”
That’s a big leap. Not knowing the difference, or being unwilling to recognize it, between presumption and fact is a pretty big mistake for any college. But then Harvard is, after all, considered one of the very worst in the land, so perhaps it’s understandable.
A small irony is that I almost went to Harvard and wrestled there, and, in probably the first big mistake of my life, did not. I still regret the decision, even if this HSAC study, its interesting nature aside, and its “HAL”-like spam machinery, seems to botch some things.
After Seattle beats the Dolphins in a close Super Bowl in February 2016 (yeah, right), I’ll stand corrected. But seriously, these are, of course, probability assessments, which is why they aren’t only hard to assess before the fact, they’re almost as hard to assess afterward. (For example, what ultimately happens in each team’s case doesn’t prove whether the initial probability assessment was right, mildly flawed, or awful.)
…That is, unless the general set of projected probabilities, lined up against actual season outcomes and divergence away from expectation, is either stunningly good or stunningly bad. Which we may well see turn out to be the case with respect to this study. (See below.)
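For what it’s worth, there is a standard way to line up a set of probability forecasts against outcomes after the fact: the Brier score, the mean squared error between each forecast probability and what actually happened. A minimal sketch, with hypothetical playoff probabilities for illustration only (not the study’s actual numbers):

```python
def brier_score(forecasts):
    """forecasts: list of (probability, outcome) pairs, where outcome is
    1 if the team made the playoffs and 0 if it did not.
    Lower is better; forecasting 0.5 for everything scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical example: a 95% forecast that comes true costs almost nothing,
# while a 50% forecast costs 0.25 either way, and a confident miss costs a lot.
example = [(0.95, 1), (0.02, 0), (0.50, 1), (0.09, 0)]
print(round(brier_score(example), 3))  # 0.065
```

A full-season comparison of the HSAC numbers against a rival set of predictions could be done exactly this way: score both sets of 32 playoff probabilities against the actual playoff field and compare.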
Still. I wrote a detailed piece a few weeks into last season illustrating why the “2%” Titans coaching switch from Mike Munchak to Ken Whisenhunt was a bad move, and the team proceeded to (still surprisingly) remain horrible throughout the entire year, losing 9 of their 14 losses by 14 points or more.
But the Titans have some solid players, and the NFL has a lot of variance as well as some general unpredictability, and the team could jell.
We also got a little spoiled on QBs coming into the league as rookies and doing fairly well the last few years, and it’s still kind of a long shot. (And I argued the Titans, in need of a QB or not, should have taken advantage of their fortuitous number two pick and traded it away to deeply build the team.) But Marcus Mariota might deliver, and, who knows, they might just surprise enough to make the playoffs.
Long shot, but “it sure ain’t as low as a one in fiddy chance.” The NFL is too unpredictable. And, Colts aside, the AFC South is a relatively weak division. Not only that, this year the AFC South plays the AFC East, which while it’s expected to be better, wasn’t a total powerhouse last year.
And it plays the division many call the worst in football (though I think the AFC South, with possibly the two worst teams in football last year, might have qualified) – the NFC South. Whisenhunt also had a losing record before joining the team, but it wasn’t dramatically under .500; and, courtesy of Kurt Warner, Anquan, and Fitz, he did take his team to a Super Bowl.
Within the next several days I’ll post some season playoff probabilities right alongside the SA Collective predictions, based on general team assessment and zero modeling. (Update: teams 1-10, 11-20, 21-32 – some of Harvard’s numbers are already looking ridiculous – and why the study’s no good, here.) It’ll be interesting later to compare how each team ultimately winds up at the end of the season – record- and proximity-to-playoffs-wise – with their projected probability chances under the HSAC study, versus the chances to be shortly posted here.
Harvard, game on. Too bad there isn’t an easy way to do this objectively, so we could put a fun, embarrassing wager on it; something like: if the backers of the study lose, they have to run twice around Harvard Square naked (and sober) with “Yale Rocks!” painted on their chests during class sessions, or something.