Artificial Intelligence in Sports | Page 3 | Syracusefan.com

Artificial Intelligence in Sports

85% if not more of today’s AI is just algorithms. The term AI is extremely overused and over-labeled.

At the same time, there is a case for using these tools to expedite work. I use it constantly at work to make my emails clearer, or to help me come up with standardized ways of doing things and to keep standards when I create or design something.

It’s used as a tool, not a replacement. I don’t tell Copilot to write an email for me. I use all of my knowledge and explanations, and then have Copilot help make it more understandable and make sure I presented everything I wanted to in a grammatically correct way. I use it to vet my ideas and brainstorm configuration and standardization, to make sure that I don’t make an error and to check my work, or to help me put my work notes into a standardized template when I finish a meeting.

If you’re not using it for these types of tasks, then I’m afraid you will be left behind as the technology continues to boom, because you will soon be competing against employees who do understand how to utilize all of their resources to make themselves more productive.
 
I get what you’re saying, but your response actually kind of proves the opposite point.

You said “it’s not genuine if you used AI,” yet the post you replied to was AI-generated and you responded to it as if it were written by a person. Which means, in practice, you couldn’t tell it was AI.

That’s really the point. If a message is thoughtful, relevant, and written with the recipient in mind, most people won’t know or care what tool helped draft it.

The tool someone uses doesn’t automatically decide whether a message is genuine.
Was waiting for the "gotcha, I wrote this with AI and you couldn't tell" post. Personally, I will continue to sharpen and use my brain. I've already seen the ill effects in my colleagues, and we're really heading towards Idiocracy (and dead internet theory).

We haven't even discussed the moral and environmental impact. ChatGPT and others are being used for advanced weaponry, Palantir is being used to build a surveillance state, Grok is poisoning the air and water of Memphis, Tennessee. Does that concern you one bit?
 
85% if not more of today’s AI is just algorithms. The term AI is extremely overused and over-labeled.

At the same time, there is a case for using these tools to expedite work. I use it constantly at work to make my emails clearer, or to help me come up with standardized ways of doing things and to keep standards when I create or design something.

It’s used as a tool, not a replacement. I don’t tell Copilot to write an email for me. I use all of my knowledge and explanations, and then have Copilot help make it more understandable and make sure I presented everything I wanted to in a grammatically correct way. I use it to vet my ideas and brainstorm configuration and standardization, to make sure that I don’t make an error and to check my work, or to help me put my work notes into a standardized template when I finish a meeting.

If you’re not using it for these types of tasks, then I’m afraid you will be left behind as the technology continues to boom, because you will soon be competing against employees who do understand how to utilize all of their resources to make themselves more productive.
CU44SE, important decision-makers like bnoro will delete your responses without reading them. You are destined to move to the basement with Milton and his red Swingline stapler.
 
Was waiting for the "gotcha, I wrote this with AI and you couldn't tell" post. Personally, I will continue to sharpen and use my brain. I've already seen the ill effects in my colleagues, and we're really heading towards Idiocracy (and dead internet theory).

We haven't even discussed the moral and environmental impact. ChatGPT and others are being used for advanced weaponry, Palantir is being used to build a surveillance state, Grok is poisoning the air and water of Memphis, Tennessee. Does that concern you one bit?

Ahhh yes, of course you knew it was AI.

You also knew that the last response was AI too, right? And please, spare me the “I knew that was AI too,” because if you did, you would have said so in that last response.

It’s okay to admit you’re wrong and that you can’t tell if it’s AI. It doesn’t make you a bad person.
 
Ahhh yes, of course you knew it was AI.

You also knew that the last response was AI too, right? And please, spare me the “I knew that was AI too,” because if you did, you would have said so in that last response.

It’s okay to admit you’re wrong and that you can’t tell if it’s AI. It doesn’t make you a bad person.
You’re a robot.

What have you done with JRHEETER?
 
Ahhh yes, of course you knew it was AI.

You also knew that the last response was AI too, right? And please, spare me the “I knew that was AI too,” because if you did, you would have said so in that last response.

It’s okay to admit you’re wrong and that you can’t tell if it’s AI. It doesn’t make you a bad person.
How do you defeat a superintelligent AI?

Ask it to summarize “The Brothers Karamazov” without using bullet points.
 
Ahhh yes, of course you knew it was AI.

You also knew that the last response was AI too, right? And please, spare me the “I knew that was AI too,” because if you did, you would have said so in that last response.

It’s okay to admit you’re wrong and that you can’t tell if it’s AI. It doesn’t make you a bad person.
No retort to the moral and environmental part of the statement? I'm sure you can use AI to tell you it's good and does no harm.

Fresh off the press: https://www.washingtonpost.com/tech...=wp_main&utm_source=bluesky&utm_medium=social

AI targeting resulted in the murder of a hundred children.
 
My Hot Take
AI will start taking over much bigger portions of athletic training and game planning over the next two years.
What do you mean by "athletic training?"
 
No retort to the moral and environmental part of the statement? I'm sure you can use AI to tell you it's good and does no harm.

Fresh off the press: https://www.washingtonpost.com/tech...=wp_main&utm_source=bluesky&utm_medium=social

AI targeting resulted in the murder of a hundred children.

Ah yes, of course, the typical deflection.

Bnoro, that wasn’t what this was originally about. I said I have AI write my emails occasionally. You told me you always ignore said emails if they were written by AI.

I proved to you, in multiple posts, that you would have no clue whether it was a human-written email or an AI-written email.
 
No retort to the moral and environmental part of the statement? I'm sure you can use AI to tell you it's good and does no harm.

Fresh off the press: https://www.washingtonpost.com/tech...=wp_main&utm_source=bluesky&utm_medium=social

AI targeting resulted in the murder of a hundred children.

If you want to talk environmental impact, I’m all for it.

As far as that goes, I’m with you. AI is absolutely horrible for the environment in its current state. No denying it.

But throughout history, most technology is pretty bad for the environment until we refine it over and over again and make it better.

The supercomputers they are using are absolute monsters right now. But they aren’t all that dissimilar from the first computers, which basically took up a whole warehouse. Look how far the technology has come now, though. Essentially everyone has a computer right in their pocket.

In 10-15 years, these large data centers will probably become obsolete, and the environmental impacts will likely become far less.

It’s why we need guardrails on AI.

AI is not going to go away and is only going to improve. It’s better to embrace it, learn from it, and be better.
 
Ah yes, of course, the typical deflection.

Bnoro, that wasn’t what this was originally about. I said I have AI write my emails occasionally. You told me you always ignore said emails if they were written by AI.

I proved to you, in multiple posts, that you would have no clue whether it was a human-written email or an AI-written email.
I'm asking you: do you have any ethical concerns about using a system that is being used to kill people?

Every one of these companies has a questionable moral compass, whether it's causing psychosis where people are killing themselves (ChatGPT), being used for war (Claude), being used for genocide (Azure/Copilot), or being used for surveillance (Palantir).

Your email use funds their ability to do these other things, whether you want to turn a blind eye to it or not.
 
Ah yes, of course, the typical deflection.

Bnoro, that wasn’t what this was originally about. I said I have AI write my emails occasionally. You told me you always ignore said emails if they were written by AI.

I proved to you, in multiple posts, that you would have no clue whether it was a human-written email or an AI-written email.
Regardless of whether you're able to tell it's AI or not, I would be upset if I found out someone I was talking to was using AI. It raises all sorts of questions: does this person think I'm not worth the time it takes to actually write to me? Does this person not have the ability to convey their thoughts without artificial help?
 
I'm asking you: do you have any ethical concerns about using a system that is being used to kill people?

Every one of these companies has a questionable moral compass, whether it's causing psychosis where people are killing themselves (ChatGPT), being used for war (Claude), being used for genocide (Azure/Copilot), or being used for surveillance (Palantir).

Your email use funds their ability to do these other things, whether you want to turn a blind eye to it or not.
I'm confused. You say that AI, as an output, essentially stinks. And then you attribute suicides, war planning, and mass surveillance to it. I don't think you can have it both ways. The free ChatGPT product doesn't resemble what Anthropic has in weapons systems. If it's a worthless product, then how can it commit genocide?
 
I'm asking you: do you have any ethical concerns about using a system that is being used to kill people?

Every one of these companies has a questionable moral compass, whether it's causing psychosis where people are killing themselves (ChatGPT), being used for war (Claude), being used for genocide (Azure/Copilot), or being used for surveillance (Palantir).

Your email use funds their ability to do these other things, whether you want to turn a blind eye to it or not.

Okay, but you also use the internet, and that does the same thing. I mean, do you think these places are not using the internet to conduct surveillance and do the same things?

It seems like your argument is all over the place because I stumped you with not knowing whether something was AI or not.

Again, this whole thing started because there are times when I need to write an email; you told me you would ignore an AI email, and I proved you would not be able to tell whether it was AI or not.
 
AI can do some amazing things, but if it takes away a million jobs, I'm not sure that helps the value proposition.

It's very costly right now, handling the low-hanging fruit that most people are using it for.

People use it to write code. But the hardest thing to do is fix code that you didn't actually write. So now you have huge swaths of installed code with no real backup. It's not even clear that if you used AI to write it and asked the same question the next day, you would get the same code. Maybe the code is 90% right, but you don't know what it does until you hit that edge case, and often, if you didn't write the code, you don't know how to really test those edge cases. At some point, do you have to use two different AIs to get more points of view, to make sure you didn't miss something? Some large companies have used AI-type tools for a long time and found them really lacking, and also quite scary. Sure, it will probably improve, but it also takes you down a deep hole of relying on these tools, and if they are not there, what do you do?

So you use Claude for 2-3 years and Claude financially can't make it work. You are in even more of a hole, since you don't have the foundation behind what you even did. It also makes you wonder, from the liability side, how you handle mistakes it caused you to make.
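The edge-case worry above can be made concrete with a toy sketch (the function and its bug are hypothetical, not taken from the thread): generated code often handles the happy path you asked about and silently misses the input nobody mentioned.

```python
def average(values):
    """Plausible-looking generated code: fine for the common case."""
    return sum(values) / len(values)

print(average([2, 4, 6]))  # happy path: prints 4.0

# The edge case nobody prompted for: an empty list.
# The code raises ZeroDivisionError instead of handling it, and
# if you didn't write it, you may not find out until production.
try:
    average([])
except ZeroDivisionError:
    print("unhandled edge case: empty input")
```

A human author would typically decide up front what `average([])` should mean; writing tests like this is how you find out whether the generated code ever made that decision.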
 
I'm asking you: do you have any ethical concerns about using a system that is being used to kill people?

Every one of these companies has a questionable moral compass, whether it's causing psychosis where people are killing themselves (ChatGPT), being used for war (Claude), being used for genocide (Azure/Copilot), or being used for surveillance (Palantir).

Your email use funds their ability to do these other things, whether you want to turn a blind eye to it or not.
And some people thought AI was just a really dumb jock out of Georgetown. Who knew he was James Bond's worst nightmare.

[attached screenshot]
 
Okay, but you also use the internet, and that does the same thing. I mean, do you think these places are not using the internet to conduct surveillance and do the same things?

It seems like your argument is all over the place because I stumped you with not knowing whether something was AI or not.

Again, this whole thing started because there are times when I need to write an email; you told me you would ignore an AI email, and I proved you would not be able to tell whether it was AI or not.
Yeah, you got me!

This whole thing started because I said I will not use AI, and I don't respect people who use AI to write emails. The reason I don't is the ethical and environmental impact.

If a few people in this thread decide to stop using it because of what I've posted, I'm happy. If you're still lazy enough to ignore those things so AI can type a three-sentence response or an email for you, that's on you.
 
I'm confused. You say that AI, as an output, essentially stinks. And then you attribute suicides, war planning, and mass surveillance to it. I don't think you can have it both ways. The free ChatGPT product doesn't resemble what Anthropic has in weapons systems. If it's a worthless product, then how can it commit genocide?

They are using these systems without considering the errors they can make.

You can read an article a day about these mistakes.
 
Yeah, you got me!

This whole thing started because I said I will not use AI, and I don't respect people who use AI to write emails. The reason I don't is the ethical and environmental impact.

If a few people in this thread decide to stop using it because of what I've posted, I'm happy. If you're still lazy enough to ignore those things so AI can type a three-sentence response or an email for you, that's on you.
Everyone believes in something. More power to you and your beliefs. But it's tough when people start bringing up the environment. People need food to survive; 99.99% of food is probably delivered to supermarkets in trucks, and gas powers those trucks. People want electric vehicles; electric vehicles strain the power grid. You can't have everything. It doesn't work like that. With the good comes the bad. It might not be the greatest example, but are you going to change things up and not use the technology available to you?

If you want the best technology, and we do want it to keep advancing, you have to understand that others are going to use that same technology in the way they want. It's unfortunate, but it is what it is.
 

They are using these systems without considering the errors they can make.

You can read an article a day about these mistakes.
I understand there are mistakes and errors in AI output. Needless to say, there are mistakes and errors in human output as well.

I'm not an AI advocate. I'm fully and thoroughly convinced that it is, and will be, a gigantic, irreversible net negative for our society.

But you seem to be suggesting that AI is dumb and will always be dumb. I think AI is much smarter than you think, right now, and will be unfathomably smarter in a year, and 2 years, and 10 years from now. Then we're really screwed.
 
It is good when you have info you need to extract from a large data set of some kind. It will definitely have (and already has) uses helping lawyers find relevant case law. It can spit out generic documents like grants and financial reports and things like that. Anything generic that involves plugging info into what is essentially a template, I would have no problem using it for, and people were using slower pre-AI tools for essentially the same task. Notes from a meeting? Sure.

But man, writing grammatically correct emails and stuff like that... I feel like you just gotta keep that brain turned on and do it yourself. Maybe I would consider setting up the AI equivalent of an auto-reply for certain types of emails, where it is always going to be the same generic response, but anything where you have to be diplomatic, tactful, careful in your choice of words: that is a skill! That is a muscle that needs to be trained and kept in shape! It is scary to think of people using AI to navigate tricky human interactions, whether that is in or out of the workplace. Those people are losing important skills built up over a lifetime. Don't outsource your own thought, people! If someone is debating something with you online, here or wherever, start working that noggin and think of a response! Making a habit of using AI for that kind of thing will make you dumb as a stump. And if anyone whips out AI in an in-person discussion, I will bully them ruthlessly!
 
