BPR Numbers as of 2/16/23 | Page 2 | Syracusefan.com


What are the numbers based on?
 
That’s why I’m such a big proponent of PFF on the football side. While I don’t think the same could be done very well in basketball, PFF, besides using stats, has analysts who actually watch every player on every play and do an evaluation. They even have ex-NFL players do some of it, especially the QB analysis. Looking at the chart above, does anyone really believe Sy is a better player than Judah?
PFF, according to their site, have 600 employees, with about 10% able to grade film. Let's say each of those ~60 graders works 8-hour days, 5 days a week: that's 2,400 hours to grade 60 games, or 40 hours per game. A game is roughly 100 plays for 22 players, so ~2,200 player-plays; 2,400 minutes / 2,200 plays works out to about 65 seconds per player-play. And each play gets reviewed from every angle, so even less time per look?

how does this math work again?

For some plays you could review two players at once, maybe with pass blocking, but then you also have to look at the play as a whole to decide: did the player do what he was supposed to, or did someone else make the real mistake?
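A quick sketch of the capacity math above (all inputs are the post's assumptions, not figures from PFF itself; note that 2,400 hours over 60 games comes out to 40 hours per game, not 4):

```python
# Back-of-the-envelope check of the PFF grading-capacity math above.
# All inputs are the post's assumptions, not figures from PFF itself.
employees = 600
grader_fraction = 0.10        # ~10% of staff said to be able to grade film
hours_per_week = 8 * 5        # 8-hour days, 5 days a week
games_per_week = 60

graders = round(employees * grader_fraction)        # 60 graders
total_hours = graders * hours_per_week              # 2,400 grading hours/week
hours_per_game = total_hours / games_per_week       # 40 hours per game

player_plays = 100 * 22                             # ~2,200 player-plays per game
secs_per_player_play = hours_per_game * 3600 / player_plays

print(f"{hours_per_game:.0f} hrs/game, {secs_per_player_play:.0f} s per player-play")
# prints: 40 hrs/game, 65 s per player-play
```

About a minute per graded player-play, before you account for multi-angle review, which is tight but not the 6 seconds the wrong division suggests.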
 
Pff according to their site have 600 people with 10% able to grade film …

I don’t have time to look it up right now, but I’m not sure that’s how it works. I believe the 10% are the ones that assign the actual grade, and then there’s a set of senior analysts that do the final review of the grade and approve it.
 
I don’t have time to look it up right now but I’m not sure that’s how it works …
Maybe. I read 600 employees: 10% do grades, 2-3% review the final grades.

Maybe I'm just reading it wrong.
 
I know some people love to crap on analytics, yet SOMEHOW... you look at the BPR (EvanMiya.com) numbers from Jan 12th vs Feb 16th and it tells you the whole story that anyone who knows what they are watching could tell you.

"Can't start Maliq" - Welp, we start him, and somehow we are now better.
Bell wasn't playing D. Now it's obvious he's gotten better, and the numbers show that (still not great, but better).
Obvious stuff could be said about Benny, but no need to pile on.

I'd say the one surprise to me is Sy's offensive numbers, but there have definitely been games where things move better when he comes in. Fewer turnovers, better shots. NOT last game, but still overall.

JAN 12th

RANK  NAME             OBPR   DBPR    BPR
1     Jesse Edwards    2.08   1.18   3.27
2     Joseph Girard    2.13   0.22   2.35
3     Justin Taylor    0.91   1.20   2.10
4     Maliq Brown      1.17   0.90   2.06
5     Judah Mintz      1.59   0.30   1.90
6     Symir Torrence   0.64   0.55   1.19
7     John Bol Ajak   -0.40   0.90   0.50
8     Benny Williams   0.24   0.22   0.45
9     Chris Bell       0.19  -0.01   0.18
10    Mounir Hima     -0.50   0.49  -0.01

FEB 16th

RANK  NAME             OBPR   DBPR    BPR
1     Jesse Edwards    2.25   1.13   3.37
2     Maliq Brown      1.88   0.86   2.74
3     Joseph Girard    2.36   0.17   2.53
4     Symir Torrence   1.64   0.47   2.12
5     Judah Mintz      1.70   0.32   2.03
6     Justin Taylor    1.00   0.50   1.51
7     Chris Bell       0.46   0.32   0.78
8     John Bol Ajak   -0.43   0.93   0.51
9     Mounir Hima     -0.28   0.66   0.38
10    Benny Williams   0.03   0.29   0.31
Does it rank teams? Where is SU for the season, and for the period you’ve covered?
 
maybe.. I read 600 employees. 10% do grades 2-3% review the final grades …

That’s what it says. The 10% assign the grades based on the analysis and stats done by others. You had said the 10% watch the film, and as your math showed, that’s pretty much impossible.
 
That’s what it says. The 10% assign the grades based on the analysis and stats done by others …
So a bunch watch it, then the 10% do something with that. I knew something was off.

You do wonder how much time they spend on plays, though.

And from people I have talked to, they get the All-22 film for the pros but not for all the college games. I would think some schools would provide it?

Maybe SWC could sneak the question in to coach next year.
 
I think the numbers for performance on offense have meaning and are worth looking at.

In my opinion, the defensive performance numbers for a zone team have always been and continue to be essentially meaningless.

They say JGIII is our best defensive player. I think Joe has improved defensively over time and is better defensively than many on the board think but he is at best an average defensive player.

Trying to grade players playing the 2-3 on defense is an exercise in futility. You really have to understand the 2-3, you have to understand the abilities of the players on offense and the game plan of the coaching staff to do this right.

And even if you have all that going for you, it is often unclear who is responsible for allowing the offense to score.

Example:

NC State moves the ball around the perimeter on offense, looking for a weakness. Jack Clark moves into the paint near the area where the ACC logo is. The guards do not drop down on him and remain focused on two players on the perimeter, just outside the 3 point circle.

Clark looks up, sees Jesse has moved up slightly but appears to be giving him the 13 footer he now has. He takes a dribble and steps closer to the basket. The guards remain focused on defending the perimeter. Jesse moves toward Clark and presents himself in defensive position.

Clark looks down low, where Brown is positioned to cut off a post entry to DJ Burns in the low post. He sees Joiner on the opposite side of the paint, covered by Chris Bell.

He looks right again and sees Jarkel Joiner uncovered deep in the right corner. He passes it to him and Joiner takes a relatively unguarded 3.

If he makes it, whose fault is it? I think a lot depends on what the staff has told Brown. The forwards (and the guards) are constantly asked to make choices and the choice they make is based on where players on offense are and what their skill set is.

The staff knows Burns has become a key part of the NC State offense and is lethal when he gets the ball in scoring position down low. They have likely told Brown to focus on Burns when he is posting low on his side, and leave the wing in the corner alone, unless it is Casey Morsell, the top outside shooter on the Wolfpack.

Brown might be doing exactly what he was coached to do and playing perfect defense. If Jarkel makes the 3, it is not Brown's fault. You could argue the guards should have made it harder to get the ball to Clark in the high post. You could argue Jesse should have been more aggressive defending Clark and gotten in his face immediately.

Things are not in black and white with zone defense. Defenders need to make informed decisions in a split second based on who is where and what their skill sets are.

It is a complex equation and no one should expect some outsider grading game film to be able to make informed decisions on responsibility for defensive gaffes.

Yes, there will be some things that are obvious, but a lot of the grading is going to be a best guess, where the guessing is not going to be very informed.
All of your posts are solid, Tom. This one is spot on.
 
BPR (Bayesian Performance Rating), a metric that predicts overall player impact based on individual stats, team success when on the court,
This will sound racist, but I love the Bayesians.

They're kinda twitchy though. You meet a statistician, won't take you long to figure out if they're a Bayesian.

They got that twitch about them.
 
I think the numbers for performance on offense have meaning and are worth looking at. In my opinion, the defensive performance numbers for a zone team have always been and continue to be essentially meaningless …
Umm, it says Joe is the worst defensive player... which is obvious.

His offense generally seems to make up for it in a big way, though...

However, in games when he is locked up on offense... he is a clear liability, most likely, imo.

Joe's had some great performances this season... but I can't wait for athletic size in the backcourt to be a thing with this program.
 
For the season, we are ranked 82nd.

They don't show rank changes, but in the past 30 days, our BPR has improved by 0.5.
Thanks for doing that, but I would question the veracity of that metric as a result. +0.5 over the last 30 days? Methinks not.
 
I think the numbers for performance on offense have meaning and are worth looking at. In my opinion, the defensive performance numbers for a zone team have always been and continue to be essentially meaningless …
The Bayesian analysis here points to Joe as the worst defender (lower number = worse defensive rating).
 
I did. And I couldn’t find, at first try, how the numbers are developed. Just words with no math.
Bayesian analysis is a probabilistic approach to statistical analysis. In brief: one starts with a distribution built from historical data on the parameter of interest (say, assist %) - the 'prior distribution' - and updates it with observed data, via a likelihood model, to produce a 'posterior distribution' used to estimate the likelihood of future outcomes for the same parameter (usually within a range). The prior is either built directly from available data (which is obviously the case for NCAA BB) or approximated by simulation (usually some variation of a Monte Carlo method).

Robust and well-established methodology for (1) looking at past data, (2) establishing reasonable ranges a parameter will fall into with high probability, and then (3) assigning a probability that, for a future event, the parameter will fall into a certain range. So, by comparing a player to others based on past data, adjusting for schedule / opponent strength and other factors, one can use the vast body of box score data to predict a player's performance - and value - going forward, against specific opponents and the schedule writ large.

Hope this helps. Stata has a good primer on Bayesian analysis.
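For the curious, the update step described above can be shown with a toy conjugate example. The Beta prior and all the numbers here are illustrative assumptions, not anything from EvanMiya's actual model:

```python
# Toy Bayesian update for a player's true assist rate (illustrative only).
# Prior from historical data: assists on ~20% of possessions, Beta(20, 80).
prior_a, prior_b = 20.0, 80.0

# Observed this stretch: 30 assists over 100 tracked possessions.
assists, possessions = 30, 100

# Beta-Binomial conjugacy: posterior = Beta(prior_a + hits, prior_b + misses).
post_a = prior_a + assists
post_b = prior_b + (possessions - assists)

posterior_mean = post_a / (post_a + post_b)
# The raw 30% observed rate gets shrunk toward the 20% historical prior.
print(round(posterior_mean, 3))  # prints: 0.25
```

That shrink-toward-history behavior, repeated across many box-score stats and adjusted for opponent strength, is roughly the flavor of thing the post is describing.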
 
So, MCC, I think it's safe to say you have a pretty good understanding of what's going on behind the mathematical curtain on these metrics. Is it fair to say that the beauty of an analysis like this is that the numbers don't really care whether you are playing zone, man-to-man, or full-court pressing - the numbers are what they are? Or, to put it another way, the proof is in the pudding?
 
Bayesian analysis is a probabilistic approach to statistical analysis …

Hey buddy. Long time no see. I’ll have to PM you with the latest update. Hope all is well.
 
The Bayesian stuff shows Judah as the fifth-best player on Cuse and 4th-best offensive player. Take it with a heavy grain of salt.
The fault lies not in the Bayesian analysis, but in the weighting of various parameters by whoever designed the model. My sense of Miyakawa's BPR is that it 'overweights' missed field goals and may discard too many possessions it deems 'unhelpful' (e.g., when a game is out of hand). A fairly obvious problem emerges: players on teams that experience lots of blowouts are harder to measure, and players who are asked to do too much - or players who force the action (e.g., freshman guards) - are penalized excessively.

No question that his defensive impact lags; almost all models will rate bigs higher.

In toto, I think the Bayesian readout here is fair.
 
Hey buddy. Long time no see …
Likewise, hope you're doing well. Please do PM me - I look forward to reading.
 
So, MCC I think it's safe to say you have a pretty good understanding of what's going on behind the mathematical curtain on these metrics …
Yes, with one caveat: the low frequency of encountering zones injects some risk into predicting future performance against a zone using the entire body of data. The easy way to counter this: model using only the data against zones. I imagine you'll find enough data to model, and if not: run a Monte Carlo with - I think - some Gibbs sampling (I could be wrong here; it's been a while since I built a model myself). This should yield a 'cloud' of data that minimizes outliers and increases sample size to better 'feed' the predicted range. The Gibbs sampling lets you test each variable / input for fit with the hypothesis you're testing - in essence, asking 'Is this data point valid for use in my against-the-zone D performance prediction?'

Alternatively, you could smooth the general dataset out using a zone-based normalizing algorithm.
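A minimal sketch of the small-sample workaround described above, with a plain bootstrap standing in for the fancier Gibbs-sampled model (the possession values are made up for illustration):

```python
import random

# Hypothetical points-per-possession on the handful of possessions we have
# against zone defenses (made-up numbers for illustration).
zone_ppp = [0.9, 1.1, 0.7, 1.0, 0.8, 1.2, 0.6, 1.0, 0.9, 1.1]

random.seed(7)  # reproducible resampling

# Monte Carlo bootstrap: resample the scarce zone-only data many times to
# build a 'cloud' of plausible means, then read off a central 95% range.
means = sorted(
    sum(random.choices(zone_ppp, k=len(zone_ppp))) / len(zone_ppp)
    for _ in range(10_000)
)
lo, hi = means[249], means[9750]
print(f"estimated PPP vs zone: {lo:.2f} to {hi:.2f}")
```

The real model would weight possessions and sample parameters jointly (that's where the Gibbs sampling would come in); the bootstrap just illustrates how resampling widens a thin dataset into a usable range.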
 
I know some people love to crap on analytics, yet SOMEHOW... you look at the BPR (EvanMiya.com) numbers from Jan 12th vs Feb 16th and it tells you the whole story that anyone who knows what they are watching could tell you …
Good info here, thanks for sharing.

One point of yours that I need to quibble with, though. I don’t recall a lot of people arguing strongly in mid-January that we “can’t start Maliq”. My recollection from that time is that most posters were very pleased with Maliq’s progress and were starting to get a little concerned about Benny (to be fair, December was Benny’s best month, so he had earned a little bit of goodwill). But after games like Va Tech (1/11), ND (1/14) and Ga Tech (1/21) - where Benny started but Maliq ended up playing the majority of the game - I saw a bunch of people basically saying ‘who cares who starts’ and/or ‘JB should play whoever is playing better’.

There was definitely more argument regarding Bell vs Taylor, with a bunch of people (myself included) who thought that Bell was unfairly taking the brunt of the blame any time SU played poorly. A lot of people were voicing strong opinions that JB absolutely had to bench Bell and start Taylor. There was even a comment made at one point along the lines of ‘anybody who supports Bell starting isn’t serious about winning’, which I thought was a bit over the top. So it’s interesting to see that the team has played better and that Bell has improved - because a lot of people were pretty adamant that this would not happen if JB stuck with Bell as the starter.
 
Yes, with one caveat: the low frequency of encountering zones injects some risk into predicting future performance against a zone using the entire body of data … Alternatively, you could smooth the general dataset out using a zone-based normalizing algorithm.

“Computer, give me a zone-based normalizing algorithm.

Enhance.

Enhance.”
 
