CuseCPT · All Conference · Joined Aug 28, 2011 · Messages: 3,212 · Likes: 7,240
What are the numbers based on?
That's why I'm such a big proponent of PFF on the football side. While I don't think the same could be done very well in basketball, PFF, besides using stats, have analysts who actually watch every player every play and do an evaluation. They even have ex-NFL players do some of it, especially the QB analysis. Looking at the chart above, does anyone really believe Sy is a better player than Judah?
PFF, according to their site, have 600 people with 10% able to grade film.. Let's say each person works 8-hour days, 5 days a week; that's 2,400 hours to grade 60 games, so 4 hours a game to grade 100 plays for 22 players, so 2,200 plays.. 240 min / 2,200 plays is about 6 seconds per play.. Each play gets reviewed from every angle, so even less time?
how does this math work again?
For some plays you could review two players at once, maybe with pass blocking, but then you also have to look at the play as a whole to decide: did the player do what he was supposed to, and someone else made the real mistake?
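For anyone retracing the arithmetic, here it is as a quick Python sketch using the figures claimed above (600 employees, 10% graders, 60 games per week, ~100 plays, 22 players; none of these are verified). Note that the staffing numbers actually give 40 hours of grader time per game; the ~6-second figure comes from dividing by 4 hours instead:

```python
# Back-of-envelope check of the grading-capacity math claimed above.
# All inputs are the poster's figures, not verified.

employees = 600
graders = employees * 0.10          # 60 people actually grading film
weekly_hours = graders * 8 * 5      # 2400 grading hours per week

games_per_week = 60
hours_per_game = weekly_hours / games_per_week   # 40 hrs of grader time per game

player_plays = 100 * 22             # 2200 player-grades per game

# The post divides 240 minutes (4 hrs) by 2200 plays to get ~6 seconds:
print(240 * 60 / player_plays)      # ~6.5 sec per player-play at 4 hrs/game

# But the staffing math above yields 40 hrs per game, not 4:
print(hours_per_game * 3600 / player_plays)   # ~65 sec per player-play
```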
I don't have time to look it up right now, but I'm not sure that's how it works. I believe the 10% are the ones that assign the actual grade, and then there's a set of senior analysts who do the final review of the grade and approve it.
I know some people love to crap on analytics, yet SOMEHOW... you look at the BPR (EvanMiya.com) numbers from Jan 12th vs Feb 16th and it tells you the whole story that anyone who knows what they are watching could tell you.
"Can't start Maliq" - Welp, we start him, somehow we are now better
Bell wasn't playing D. Now it's obvious he's gotten better, and the numbers show that. (Still not great, but better.)
Obvious stuff can be said about Benny, but no need to pile on.
I'd say the one surprise to me is Sy's offensive numbers, but there definitely have been games where things move better when he comes in. Fewer turnovers, better shots. NOT last game, but still overall.
JAN 12th
RANK  NAME             OBPR   DBPR    BPR
  1   Jesse Edwards     2.08   1.18   3.27
  2   Joseph Girard     2.13   0.22   2.35
  3   Justin Taylor     0.91   1.20   2.10
  4   Maliq Brown       1.17   0.90   2.06
  5   Judah Mintz       1.59   0.30   1.90
  6   Symir Torrence    0.64   0.55   1.19
  7   John Bol Ajak    -0.40   0.90   0.50
  8   Benny Williams    0.24   0.22   0.45
  9   Chris Bell        0.19  -0.01   0.18
 10   Mounir Hima      -0.50   0.49  -0.01
FEB 16th
RANK  NAME             OBPR   DBPR    BPR
  1   Jesse Edwards     2.25   1.13   3.37
  2   Maliq Brown       1.88   0.86   2.74
  3   Joseph Girard     2.36   0.17   2.53
  4   Symir Torrence    1.64   0.47   2.12
  5   Judah Mintz       1.70   0.32   2.03
  6   Justin Taylor     1.00   0.50   1.51
  7   Chris Bell        0.46   0.32   0.78
  8   John Bol Ajak    -0.43   0.93   0.51
  9   Mounir Hima      -0.28   0.66   0.38
 10   Benny Williams    0.03   0.29   0.31
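To make the Jan 12th vs Feb 16th movement explicit, here is a quick Python sketch computing each player's BPR change from the two tables above (values transcribed from this post; nothing pulled from EvanMiya directly):

```python
# BPR on the two dates, transcribed from the tables above (EvanMiya.com).
jan = {"Edwards": 3.27, "Girard": 2.35, "Taylor": 2.10, "Brown": 2.06,
       "Mintz": 1.90, "Torrence": 1.19, "Ajak": 0.50, "Williams": 0.45,
       "Bell": 0.18, "Hima": -0.01}
feb = {"Edwards": 3.37, "Brown": 2.74, "Girard": 2.53, "Torrence": 2.12,
       "Mintz": 2.03, "Taylor": 1.51, "Bell": 0.78, "Ajak": 0.51,
       "Hima": 0.38, "Williams": 0.31}

# Sort by the change over the five weeks, biggest gain first.
for name, delta in sorted(((n, feb[n] - jan[n]) for n in jan),
                          key=lambda x: -x[1]):
    print(f"{name:10s} {delta:+.2f}")
# Torrence +0.93, Brown +0.68 and Bell +0.60 lead; Williams and Taylor drop.
```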
Maybe.. I read 600 employees, 10% do grades, 2-3% review the final grades..
Maybe I'm just reading it wrong..
So a bunch watch it, then 10% do something with that. I knew something was off.
That's what it says. The 10% assign the grades based on the analysis and stats done by others. You had said the 10% watch the film, and as your math showed, that's pretty much impossible.
Does it rank teams? SU for season, and the period you've covered?
For the season, we are 82.
All of your posts are solid, Tom; this one is spot on.
I think the numbers for performance on offense have meaning and are worth looking at.
In my opinion, the defensive performance numbers for a zone team have always been and continue to be essentially meaningless.
They say JGIII is our best defensive player. I think Joe has improved defensively over time and is better defensively than many on the board think but he is at best an average defensive player.
Trying to grade players playing the 2-3 on defense is an exercise in futility. You really have to understand the 2-3, you have to understand the abilities of the players on offense and the game plan of the coaching staff to do this right.
And even if you have all that going for you, it is often unclear who is responsible for allowing the offense to score.
Example:
NC State moves the ball around the perimeter on offense, looking for a weakness. Jack Clark moves into the paint near the area where the ACC logo is. The guards do not drop down on him and remain focused on two players on the perimeter, just outside the 3-point arc.
Clark looks up and sees Jesse has moved up slightly but appears to be giving him the 13-footer he now has. He takes a dribble and steps closer to the basket. The guards remain focused on defending the perimeter. Jesse moves toward Clark and gets into defensive position.
Clark looks down low, where Brown is positioned to cut off a post entry to DJ Burns in the low post. He sees Joiner on the opposite side of the paint, covered by Chris Bell.
He looks right again and sees Jarkel Joiner uncovered deep in the right corner. He passes it to him, and Joiner takes a relatively unguarded 3.
If he makes it, whose fault is it? I think a lot depends on what the staff has told Brown. The forwards (and the guards) are constantly asked to make choices, and the choice they make is based on where the players on offense are and what their skill sets are.
The staff knows Burns has become a key part of the NC State offense and is lethal when he gets the ball in scoring position down low. They have likely told Brown to focus on Burns when he is posting low on his side, and leave the wing in the corner alone, unless it is Casey Morsell, the top outside shooter on the Wolfpack.
Brown might be doing exactly what he was coached to do and playing perfect defense. If Jarkel makes the 3, it is not his fault. You could argue the guards should have made it harder to get the ball to Clark in the high post. You could argue Jesse should have been more aggressive defending Clark and gotten in his face immediately.
Things are not black and white with zone defense. Defenders need to make informed decisions in a split second based on who is where and what their skill sets are.
It is a complex equation and no one should expect some outsider grading game film to be able to make informed decisions on responsibility for defensive gaffes.
Yes, there will be some things that are obvious, but a lot of the grading is going to be a best guess, where the guessing is not going to be very informed.
BPR (Bayesian Performance Rating): a metric that predicts overall player impact based on individual stats, team success when on the court, ...
Umm, it says Joe is the worst defensive player... which is obvious.
This will sound racist, but I love the Bayesians.
They're kinda twitchy though. You meet a statistician, won't take you long to figure out if they're a Bayesian.
They got that twitch about them.
They don't show rank changes, but in the past 30 days, our BPR has improved by .5.
Thanks for doing that, but I would question the veracity of that metric as a result. +.5 over the last 30 days? Methinks not.
Bayesian analysis here points to Joe as the worst defender (lower number = worse defensive rating).
I did. And I couldn't find, at first try, how the numbers are developed. Just words with no math.
Who said this?
Yet, some argue that Jimmy starting over Benny last season was nepotism. Proof of the existence of alternative universes.
Bayesian analysis is a probabilistic approach to statistical analysis. In brief: one uses a distribution of historical data on the parameter of interest (say, assist %) and then applies this to estimate the likelihood of future outcomes for the same parameter (usually within a range). The historical data defines the 'prior distribution' and is either actually available (which is obviously true for NCAA BB) or estimated by a likelihood model (usually some variation of a Monte Carlo cloud simulation).
Robust and well-established methodology for (1) looking at past data, (2) establishing reasonable ranges a parameter will fall into with high probability, and then (3) assigning a probability that, for a future event, the parameter will fall into a certain range. So, by comparing a player to others based on past data, adjusting for schedule / opponent strength and other factors, one can use the vast body of box score data to predict a player's performance - and value - going forward against specific opponents and the schedule writ large.
Hope this helps. Stata has a good primer on Bayesian analysis.
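For anyone who wants to see the mechanics, here is a minimal Beta-Binomial sketch of the prior-to-posterior update described above. The player, the prior strength, and the shooting numbers are all made up for illustration; this is not EvanMiya's actual model:

```python
import numpy as np

# Toy example: estimating a player's true 3P% from historical data
# plus new games, via a conjugate Beta-Binomial update.

# Prior: historical college guards shoot ~34% on 3s; treat that history
# as worth ~50 shots of information (both numbers invented).
prior_alpha, prior_beta = 0.34 * 50, 0.66 * 50

# New observed data for one player: 28 makes on 70 attempts (invented).
makes, attempts = 28, 70

# Conjugate update: posterior is Beta(alpha + makes, beta + misses).
post_alpha = prior_alpha + makes
post_beta = prior_beta + (attempts - makes)

posterior_mean = post_alpha / (post_alpha + post_beta)
print(f"posterior mean 3P%: {posterior_mean:.3f}")

# Probability the player's true 3P% exceeds 36%, via Monte Carlo draws:
draws = np.random.default_rng(0).beta(post_alpha, post_beta, 100_000)
print(f"P(true 3P% > .36) = {(draws > 0.36).mean():.2f}")
```

The Monte Carlo piece at the end is just posterior draws; a full player-impact rating layers many such parameters and adjusts for opponent strength, as the post above describes.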
Bayesian stuff shows Judah as the fifth best player on Cuse and 4th best offensive player. Take it with a heavy grain of salt.
The fault lies not in the Bayesian analysis, but in the weighting of various parameters by whoever designed the model. My sense of Miyakawa's BPR is that it 'overweights' missed field goals and may discard too many possessions it deems 'unhelpful' (e.g., when a game is out of hand). A fairly obvious problem emerges: players on teams that experience lots of blowouts are harder to measure, and players who are asked to do too much - or players who force the action (e.g. freshman guards) - are penalized excessively.
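To see how much the missed-FG weight alone can matter, here is a toy example with a completely hypothetical linear rating (not BPR's actual formula): whether an efficient role player or a high-usage guard rates higher flips depending on that single weight.

```python
# Toy illustration of the weighting concern above. The formula and all
# weights are hypothetical; only the sensitivity pattern is the point.

def toy_rating(pts, missed_fg, tov, w_miss):
    # made-up linear rating; w_miss is the penalty per missed field goal
    return 0.1 * pts - w_miss * missed_fg - 0.15 * tov

# Two hypothetical stat lines: an efficient role player vs. a
# high-usage freshman guard who forces the action.
role_player = dict(pts=9, missed_fg=3, tov=1)
volume_guard = dict(pts=18, missed_fg=9, tov=3)

for w_miss in (0.05, 0.15, 0.25):
    r1 = toy_rating(**role_player, w_miss=w_miss)
    r2 = toy_rating(**volume_guard, w_miss=w_miss)
    leader = "volume guard" if r2 > r1 else "role player"
    print(f"w_miss={w_miss:.2f}: role={r1:+.2f} volume={r2:+.2f} -> {leader}")
```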
Hey buddy. Long time no see. I'll have to PM you with the latest update. Hope all is well.
Likewise, hope you're doing well. Please do PM me - I look forward to reading.
So, MCC, I think it's safe to say you have a pretty good understanding of what's going on behind the mathematical curtain on these metrics. Is it fair to say that the beauty of an analysis like this is that the numbers don't really care whether you are playing zone, man-to-man, or full-court pressing; the numbers are what they are? Or, to put it another way, the proof is in the pudding?
Yes, with one caveat: the low frequency of encountering zones injects some risk into predicting future performance against a zone using the entire body of data. The easy way to counter this: model using only the data against zones. I imagine you'll find enough data to model, and if not: run a Monte Carlo with - I think - some Gibbs sampling (I could be wrong here; it's been a while since I built a model myself). This should yield a 'cloud' of data that minimizes outliers and increases sample size to better 'feed' the predicted range. The Gibbs sampling lets you test each variable / input for fit with the hypothesis you're testing - in essence, asking 'Is this data point valid for use in my against-the-zone D performance prediction?'
Alternatively, you could smooth the general dataset out using a zone-based normalizing algorithm.
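For what it's worth, here is a minimal sketch of the "model only the against-zone data" idea with a toy Gibbs sampler (normal model, semi-conjugate priors). The efficiency numbers are invented, and this is nowhere near a real player-impact model. One gentle note on the hedge above: strictly, Gibbs sampling draws each parameter in turn from its conditional distribution, rather than testing inputs for fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical points-per-possession against zone looks only (made-up
# numbers; in practice you'd filter play-by-play to zone possessions).
zone_eff = np.array([0.92, 1.08, 0.85, 1.15, 0.98, 1.02, 0.88, 1.11])

# Semi-conjugate priors: mu ~ N(mu0, tau0^2), sigma^2 ~ InvGamma(a0, b0)
mu0, tau0_sq = 1.0, 0.25
a0, b0 = 2.0, 0.05

n, ybar = len(zone_eff), zone_eff.mean()
mu, sigma_sq = ybar, zone_eff.var()   # initialize at sample values

mu_draws = []
for _ in range(5000):
    # Draw mu | sigma^2, data  (Normal conditional)
    tau_n_sq = 1.0 / (1.0 / tau0_sq + n / sigma_sq)
    mu_n = tau_n_sq * (mu0 / tau0_sq + n * ybar / sigma_sq)
    mu = rng.normal(mu_n, np.sqrt(tau_n_sq))
    # Draw sigma^2 | mu, data  (Inverse-Gamma conditional)
    a_n = a0 + n / 2
    b_n = b0 + 0.5 * np.sum((zone_eff - mu) ** 2)
    sigma_sq = 1.0 / rng.gamma(a_n, 1.0 / b_n)
    mu_draws.append(mu)

# Posterior for mean against-zone efficiency (burn-in discarded):
print(np.mean(mu_draws[500:]), np.percentile(mu_draws[500:], [2.5, 97.5]))
```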
Good info here, thanks for sharing.
"Can't start Maliq" - Welp, we start him, somehow we are now better
Bell wasn't playing D. Now it's obvious he's gotten better than the numbers show that. (still not great, but better)
Obvious stuff can be said about Benny, but no need to pile on.
I'd say the one surprise to me is Sy's offensive numbers but there definitely has been games where things move better when he comes in. Less turnovers, better shots. NOT last game, but still overall.
JAN 12th
RANK NAME OBPR DBPR BPR 1 Jesse Edwards 2.08 1.18 3.27 2 Joseph Girard 2.13 0.22 2.35 3 Justin Taylor 0.91 1.20 2.10 4 Maliq Brown 1.17 0.90 2.06 5 Judah Mintz 1.59 0.30 1.90 6 Symir Torrence 0.64 0.55 1.19 7 John Bol Ajak -0.40 0.90 0.50 8 Benny Williams 0.24 0.22 0.45 9 Chris Bell 0.19 -0.01 0.18 10 Mounir Hima -0.50 0.49 -0.01
FEB 16th
RANK Name OBPR DBPR BPR 1 Jesse Edwards 2.25 1.13 3.37 2 Maliq Brown 1.88 0.86 2.74 3 Joseph Girard 2.36 0.17 2.53 4 Symir Torrence 1.64 0.47 2.12 5 Judah Mintz 1.70 0.32 2.03 6 Justin Taylor 1.00 0.50 1.51 7 Chris Bell 0.46 0.32 0.78 8 John Bol Ajak -0.43 0.93 0.51 9 Mounir Hima -0.28 0.66 0.38 10 Benny Williams 0.03 0.29 0.31