
Thread: on sabermetrics

  1. on sabermetrics

    I just read this piece on Grantland: http://www.grantland.com/story/_/id/...e-math-problem

    It's pretty good. The general idea is that we may be relying on "the math" a little too much to evaluate teams and players, which creates a huge blind spot.

    For example, Daryl Morey is often cited as a revolutionary GM in the NBA, pushing the boundaries of sabermetrics. But his team hasn't been noticeably better than it was before. Denver hired the guy who wrote "Basketball on Paper"; what have they done?

    I own Basketball on Paper. I've read it. It's a fine book, very enlightening, very insightful. But all the teams that have won the past few years have won by having one or more star players and molding themselves into a cohesive team when it counted. In other words... the exact same reason every team has ever won. How does translating Basketball on Paper into general managing win rings?

    As the author points out, the Mavs were outgunned by every team they played in the postseason. Even Portland. And the Phillies, last year, should have wiped the floor with the Giants. They were the far superior team. Yet it wasn't just that the Giants won... it's that the series wasn't even close. Was it just because the Giants "got hot"? That's the point.

    I fear we're getting to a point in "sabermetrics" where the same douchebags who wrecked finance in the 2000s are now moving into the sports world. Actually, in the case of the Rays, that's literally true.

    So what are people's feelings on sabermetrics?

  2. I really only know about baseball sabermetrics, and I think they're pretty valuable. That being said, I don't think they're a tell-all way of completely evaluating a player and/or a team. The FIP (or DICE or...) stat is generally a good way to evaluate a pitcher, and it has been a good indicator of a pitcher performing above or below his skill level and of how things will likely even out (Barry Zito as an overperformer, Jonathan Sanchez as an underperformer).
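
    For anyone curious, the basic FIP formula is simple enough to compute yourself. A minimal sketch in Python (the additive constant is a league-wide scaling factor that varies a bit by year; around 3.10 is typical, and it just puts FIP on an ERA scale):

        def fip(hr, bb, hbp, k, ip, league_constant=3.10):
            # Fielding Independent Pitching: built only from outcomes the
            # defense can't touch (home runs, walks, hit batters, strikeouts).
            # The constant rescales the result to read like an ERA.
            return (13 * hr + 3 * (bb + hbp) - 2 * k) / ip + league_constant

    Comparing a pitcher's FIP to his actual ERA is the quick way to flag the Zito/Sanchez over/underperformer cases.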

    There may already be a stat for it, but I've played around a little bit with the math, trying to calculate team/player match-ups based upon a comparison of overall skills in different areas. I think it's beyond my current knowledge and time, but I'm quite confident that such a comparison could yield pretty good results when determining whether a team will win or not. Basically something like this (this is just an example, not a worked-out solution):

    Team A has a fielding rating of 92 (things such as range factor calculated overall and then given a 0-100 number based on league averages), a batting rating of 70 (similar calculations) and a pitching rating of 91 (mostly FIP, again averaged). When game time comes along, these ratings are subject to change depending on who starts the game and who may enter it later. This could be equalized by examining team strategies based upon the likely course the early game will take. Also, depending on the fielding rating, a pitcher with low strikeout totals and low walk totals could still be expected to keep runners off base at an above-average rate.

    Team B has a fielding rating of 85, a batting rating of 80 and a pitching rating of 79.

    Match these two sets of numbers up. Fielding rates could be weighed alongside pitching rates to determine the overall value both ratings have to the team. Then the batting rates of one team could be set against the combined pitching/fielding rates of the other to determine overall values. Put all these numbers together and you should get a pretty good idea of who should win any given contest. It's probably pretty confusing the way I presented it, but given some more time, I think I can flesh things out a bit (rough sketch below). Why I decided to explain this here, I'm not entirely sure... Somebody else probably does this anyway.
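
    To make that concrete, here's a toy version in Python. Everything here is invented for illustration: the weights and the scaling factor would have to be fit against historical results to mean anything:

        def win_probability(team_a, team_b):
            # Pair each side's run prevention (pitching plus fielding)
            # against the other side's run creation (batting).
            def prevention(team):
                # 70/30 pitching/fielding split: an arbitrary guess.
                return 0.7 * team["pitching"] + 0.3 * team["fielding"]

            edge_a = team_a["batting"] - prevention(team_b)
            edge_b = team_b["batting"] - prevention(team_a)
            # Squash the rating gap into a 0-1 win probability
            # (the divisor 40 is another made-up tuning knob).
            return 1 / (1 + 10 ** (-(edge_a - edge_b) / 40))

        team_a = {"fielding": 92, "batting": 70, "pitching": 91}
        team_b = {"fielding": 85, "batting": 80, "pitching": 79}
        # ~0.51: Team A's pitching edge roughly cancels Team B's batting edge.
        print(win_probability(team_a, team_b))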

    Basically, I think sabermetrics are a good evaluation tool, but they can't show things like the mental fortitude of certain players under different scenarios. Also, because a lot of luck is involved in baseball, the scale can be dramatically tipped in a smaller sample size.

    Also, I don't think the Phillies-Giants series was all that lopsided. The Giants did win in six games, but they were outscored by the Phillies overall. They did get "hot", but most of the games were pretty close.

  3. I think the article is pretty weak but I agree with it in spirit.

  4. Gohron, baseball is the sport best suited to sabermetrics. It's more individualized, and it proceeds in a more or less linear fashion, where a single player can take "credit" for what's happening, which means everything can be wrapped up in a model.

    That's a unique sport, though. Basketball is nonlinear and free-flowing; stuff happens away from the rock that can greatly affect the game (think of a pick & roll). I'm not sure you can capture that sport in a model. Look at what happened when the Celts dealt Perkins. That destroyed the team, probably for good. Ainge should be fired for that move. But all the numbers looked good.

    And forget about football. Outside of one sport where it has been fairly useful (and even there, the last two WS winners were old-school: big money in the case of the damned Yankees, and overpay-for-old-people-and-bums in the case of the Giants), the whole "statistical revolution" might just be a waste of time.

  5. I think two other things about baseball are that it was more popular than the other sports during the dawn of sabermetrics, and that there is no continuous variable like time (instead we substitute estimates like "possessions"). Data capture is still the expensive part.
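
    For what it's worth, the usual simple version of the possession estimate is a one-liner; here's a sketch (the free-throw coefficient varies by source, roughly 0.4 to 0.44, since only some free throws end a possession):

        def possessions(fga, oreb, tov, fta, ft_coeff=0.44):
            # A field-goal attempt, a turnover, or a possession-ending
            # trip to the line finishes a possession; an offensive
            # rebound extends one, so it's subtracted back out.
            return fga - oreb + tov + ft_coeff * fta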

    As we gain the ability to capture more data, these other sports can be analyzed in more than a trivial way. Numbers can lie, and so can your eyes. But I agree with the cynical view: we like to look at the numbers because it's easy to cover your butt by saying "but the numbers!" and not lose your job. "But my eyes!" is met with more disdain from the people who make the big decisions. Sometimes the numbers get treated as gospel, and many times those people don't use them properly.

    So the real question is: are these models met with any real scrutiny? Anyone can look at some variables and make a model. Fewer people, but still many, can explain it to enough people to make it 'true'. But do we have enough historical data (specifically in football and basketball) to test these models with out-of-sample data?
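
    The out-of-sample discipline itself isn't complicated, just rarely applied. A toy sketch (all numbers invented): fit a threshold "model" on old seasons, then only believe it if the accuracy holds up on seasons it never saw:

        # (rating, won) pairs; all data made up for illustration.
        train = [(92, 1), (88, 1), (75, 0), (60, 0), (81, 1), (70, 0)]
        test = [(90, 1), (65, 0), (78, 1), (72, 1)]

        # "Model": the rating cutoff that best separates the training set.
        threshold = max((r for r, _ in train),
                        key=lambda t: sum((r >= t) == bool(w) for r, w in train))

        score = lambda data: sum((r >= threshold) == bool(w) for r, w in data) / len(data)
        # Perfect in-sample, mediocre out-of-sample: the model "learned" noise.
        print(f"in-sample {score(train):.0%}, out-of-sample {score(test):.0%}")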

    This quote:
    The variables don't matter nearly as much as we think.

    I don't know if that's as accurate as 'there are more variables than we are considering'.

    It only gets a sidenote in the article. But data analysis requires a lot of data AND the ability to (correctly) decide what is important. I think with basketball we have so little data that, when analyzing it, we don't have the stuff that is actually important. I know there are things being used like player tracking, but it's still in its infancy. It takes time.

    So as for Dallas' finals run: there was no data on Barea. I think you need better analysis, which is followed by better data, which is followed by better analysis... Oh, and even then you're basing this success on one series with Miami. So even if the data was great and your model was great, so what? Miami choked in Game 2, and I dare anyone to tell me that if they had won that game, they still wouldn't have won the Finals.

  6. All models are wrong; some are useful.

  7. Of course. There is no such thing as a perfect fit. But we're in an age where there are so many models, and most of them never come under sufficient scrutiny.

    There's a difference between math and bad math.

  8. Bad math isn't even the problem. It's a lack of understanding of the math. If a model is 70% accurate and the people using it don't understand the other 30%, that's the problem. For simplicity's sake, let's say one of the actuarial models diff uses predicted survival ages perfectly for car accidents, murder, pneumonia, etc., but left cancer out. If the insurance companies were selling policies that didn't take cancer into account, that's pretty awful if they didn't understand that.
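
    To put toy numbers on that (all rates invented): with independent cause-specific death probabilities, a model that drops one cause systematically overstates survival, and any policy priced off it is mispriced by exactly that gap:

        # Hypothetical one-year death probabilities by cause.
        causes = {"car accident": 0.0002, "murder": 0.0001,
                  "pneumonia": 0.0005, "cancer": 0.0020}

        survive_full = 1.0
        for p in causes.values():
            survive_full *= 1 - p  # must survive every cause

        # The flawed model that "left cancer out":
        survive_no_cancer = survive_full / (1 - causes["cancer"])

        print(f"full model: {survive_full:.4%}")
        print(f"cancer omitted: {survive_no_cancer:.4%}")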

  9. That's what I said in the first place. And by bad math, that's what I mean. I don't mean "forgot to carry the 1".

    Quite honestly, there are a lot of people who rely on "the math" without understanding what they're relying on. Relying on the math becomes a sort of smoke screen, but that isn't math's fault (forget the philosophical question of what math is in the first place). We just don't know enough yet, and we don't necessarily know what we would need to know. If that makes sense.

  10. Yep. I don't know about diff or kof, but some of the execs I work for honestly think I can conjure shit out of the air with statistics, like I'm using the Force or some shit. "We know you don't have any clean data whatsoever, but you can tell us the odds with 99% certainty, right?" If you have to deal with insanity, it might as well be flattering, I suppose.
