Oh, so Hertz has gotten wise to… every online platform that exists: Outsourcing all responsibility for their user-hostile bullshit to some vague “system” that cannot be held accountable.
I’m so sorry but the advertised cost has doubled because… Computer says so! No, sir, there’s nothing I can do, sir, you see it’s the system.
And you can’t go anywhere else, because everyone else is doing it (or soon will be) too!
I think it’s generally a brilliant solution but there are a couple of problems here:
- The scanner seems to flag fucking everything and charges for minor damage that a human would probably write off as normal wear.
- No one is allowed to correct the scanner:
  > Perturbed by the apparent mistake, the user tried to speak to employees and managers at the Hertz counter, but none were able to help, and all “pointed fingers at the ‘AI scanner.’” They were told to contact customer support — but even that proved futile after representatives claimed they “can’t do anything.”
Sounds to me like they’re just trying to replace those employees. That’s why they won’t let them interfere.
I’m not sure how you can make the points you make, and still call it a “generally brilliant solution”
The entire point of this system - like anything a giant company like Hertz does - is not to be fair to the customer. The point is to screw the customer over to make money.
Not allowing human employees to challenge the incorrect AI decision is very intentional, because it defers your complaint to a later time when you have to phone customer support.
This means you no longer have the persuasion power of being there in person at the time of the assessment, with the car still there too, and means you have to muster the time and effort to call customer services - which they are hoping you won’t bother doing. Even if you do call, CS hold all the cards at that point and can easily swerve you over the phone.
It’s all part of the business strategy.
> I’m not sure how you can make the points you make, and still call it a “generally brilliant solution”
Because the technology itself is not the problem, it’s the application. Not complicated.
The technology is literally the problem as it’s not working
There’s literally nothing wrong with the technology. The problem is the application.
The technology is NOT DOING WHAT IT’S MEANT TO DO - it is IDENTIFYING DAMAGE WHERE THERE IS NONE - the TECHNOLOGY is NOT working as it should
The technology isn’t there to accurately assess damage. It’s there to give Hertz an excuse to charge you extra money. It’s working exactly as the ghouls in the C-suite like.
Just because THE TECHNOLOGY IS NOT PERFECT does not mean it is NOT DOING WHAT IT’S intended to do. Sorry I’m having trouble controlling THE VOLUME OF MY VOICE.
> There’s literally nothing wrong with the technology.
Pick a lane, troll.
It works as Hertz intended. And that’s the problem.
It’s really funny here. There already exists software that does this stuff. It’s existed for quite a while. I personally know a software engineer that works at a company that creates this stuff. It’s sold to insurance companies. Hertz’s version must just totally suck.
It’s designed to suck.
You are spot on here. AI is great for sensitivity (noticing potential issues), but terrible for specificity (giving many false positives).
The issue is how AI is used, not the AI itself. They don’t have a human in the checking process. They should use the AI scanner to check the car. If it’s fine, then you have saved the employee from manually checking, which is a time-consuming process and prone to error.
If the AI spots something, then get an employee to look at the issues it highlighted. If it’s just a water drop or other false positive, it should be a one-click ‘ignore’, and the customer goes on their way without charge. If it is genuine, show the evidence to the customer and discuss charges in person (roughly the flow sketched below). The company still saves time over a manual check and gets much better accuracy and evidence collection.
They are being greedy by trying to eliminate the employee altogether. This probably doesn’t actually save any money; if anything it costs more in dealing with complaints, not to mention the loss of sales from building a poor image.
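Here’s a minimal sketch in Python of the human-in-the-loop flow being proposed above. It’s purely illustrative: the names (`ScanFinding`, `triage_return`, `reviewed_by_human`) are made up, and this is not anything Hertz or UVEye actually ships.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Verdict(Enum):
    NO_CHARGE = "no_charge"
    IGNORE_FALSE_POSITIVE = "ignore_false_positive"
    CHARGE_WITH_EVIDENCE = "charge_with_evidence"

@dataclass
class ScanFinding:
    location: str      # e.g. "front-left door" (hypothetical field)
    confidence: float  # scanner's confidence this is real damage, 0..1
    image_path: str    # photo evidence captured by the scanner

def triage_return(
    findings: list[ScanFinding],
    reviewed_by_human: Callable[[ScanFinding], bool],
) -> Verdict:
    """Hypothetical human-in-the-loop flow: the scanner only flags, a person makes the call."""
    if not findings:
        # Scanner saw nothing: customer leaves, employee skips the manual walkaround.
        return Verdict.NO_CHARGE

    # Scanner flagged something: an employee reviews the highlighted images.
    confirmed = [f for f in findings if reviewed_by_human(f)]
    if not confirmed:
        # Water drop, glare, normal wear: one click and the customer walks, no charge.
        return Verdict.IGNORE_FALSE_POSITIVE

    # Genuine damage: show the photos to the customer and discuss charges in person.
    return Verdict.CHARGE_WITH_EVIDENCE
```

The key design point is the middle branch: a false positive costs one click at the counter instead of a phone tree after the fact.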
> AI is great for sensitivity (noticing potential issues), but terrible for specificity (giving many false positives).
AI is not uniquely prone to false positives; in this case, it’s being used deliberately to produce them.
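To pin down the jargon: sensitivity is the true-positive rate (how much real damage gets caught), specificity is the true-negative rate (how often clean panels are correctly passed). A toy confusion matrix with invented numbers shows how a scanner can score high on one and still bury customers in bogus charges:

```python
# Invented numbers, purely for illustration -- not real Hertz/UVEye figures.
true_positives  = 45   # real damage, flagged
false_negatives = 5    # real damage, missed
true_negatives  = 600  # clean panels, correctly passed
false_positives = 350  # water drops / glare / wear, wrongly flagged as damage

sensitivity = true_positives / (true_positives + false_negatives)   # 0.90
specificity = true_negatives / (true_negatives + false_positives)   # ~0.63

print(f"sensitivity: {sensitivity:.2f}")  # high: it rarely misses real damage
print(f"specificity: {specificity:.2f}")  # low: lots of bogus "damage" charges
```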
Sounds like they want to lose those customers.
Companies have been fucking consumers since the beginning of time and consumers, time and time again, bend over and ask for more. Just look at all of the most successful companies in the world and ask yourself, are they constantly trying to deliver the most amazing service possible for their customers or are they trying to find new ways to fuck them at every available opportunity?
Okay so…in the rare event I need to rent a car, any suggestions on who to use that isn’t Hertz and sister companies?
I wonder what a credit card dispute would result in here. Underutilized feature when businesses pull shady shit. Think I’ve had 6 or so disputes over the years, never failed.
Too many people these days don’t use or have access to credit cards for services like this. Many people I know only use bank debit cards, or worse, use preloaded debit cash cards issued by their employer’s payroll service provider.
Credit cards motivate banks to help you, because if you refuse to pay and the business doesn’t refund you, the bank has to take the hit.
Debit cards will work as well if your bank values its reputation - but not all banks do.
And I would not trust a preloaded card provider to assist. You are neither their business partner nor their customer, and that puts your interests at the bottom of a very long list. You have to hope some law is on your side or that your issue is so trivial that resolving it is more cost-effective than dealing with you.
Credit cards are also an instrument of christofascist pedophiles who want to ban all pornography and ‘pornography’ (they consider the existence of queer people to be porn)
Sounds like that shit with dodgy smoking detection in a hotel from last week…
Yup, intentionally using dodgy tools to extract more money from people under false pretenses. At this point I’m boycotting any company that claims to use AI, fuck em all.
Good luck trying to boycott a car rental company, as far as I can tell they are all actually the same company with 5 different “brands”. You rent from one but when you show up they send you to another one who has the car. It’s crazy.
I’d ask for the stupid AI scanning system to scan the car before I agree to rent it. Once they sign off on the ‘all clear’ notification from their AI scanner before the rental, then I’d consider renting it… but after reading this headline, I’d probably just tell them I’m spending a few hundred dollars more on renting a car from someone else.
Just spitballing here, but they probably tune the AI to different thresholds between return and rent-out so that they can rake in damage fees for things that “weren’t there” during the first AI scan.
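That would take nothing more than two decision thresholds. The sketch below is pure speculation with invented numbers, just to illustrate the mechanics of the accusation:

```python
# Speculative illustration of the asymmetric-threshold idea above.
# Thresholds and scores are invented; nothing here reflects Hertz's real system.
PICKUP_THRESHOLD = 0.90   # scanner must be very sure to flag damage at rent-out
RETURN_THRESHOLD = 0.30   # almost anything gets flagged at return

def flagged(damage_score: float, threshold: float) -> bool:
    return damage_score >= threshold

scratch_score = 0.55  # the same faint scratch, scoring the same at both scans

print(flagged(scratch_score, PICKUP_THRESHOLD))  # False -> "no pre-existing damage"
print(flagged(scratch_score, RETURN_THRESHOLD))  # True  -> damage fee
```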
You mean an LLM that doesn’t have the ability to understand context fails to make decisions that require context to do properly? Shocking /s
Except they are using computer vision, not an LLM
And what is processing that information?
Computer vision commonly uses convolutional neural networks on the input, which is different from the transformer neural networks used in LLMs. If you have more info indicating LLMs are used here please share
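For what it’s worth, a bare-bones CNN classifier looks something like the PyTorch sketch below. It’s a generic toy, not UVEye’s actual architecture (which isn’t public); the point is just that there’s no language model anywhere in this kind of pipeline.

```python
# Generic illustration of a CNN-style damage classifier. Requires torch (PyTorch).
import torch
import torch.nn as nn

class TinyDamageCNN(nn.Module):
    """Convolutions learn local texture/edge features (scratches, dents)."""
    def __init__(self, num_classes: int = 2):  # e.g. {clean, damaged}
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# One fake 224x224 RGB panel image in, per-class scores out.
logits = TinyDamageCNN()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```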
> If you have more info indicating LLMs are used here please share
Two seconds of research would reveal LLMs are ALL OVER COMPUTER VISION. Are convolutional networks used? Yes. Are LLMs used? Yes. And MLLMs.
Tell you what, sparky: you find me a source that says ONLY CNNs are used, then you can act like a subject matter expert.
https://arxiv.org/abs/2311.16673
https://github.com/OpenGVLab/VisionLLM
https://www.chooch.com/blog/how-to-integrate-large-language-models-with-computer-vision/
I was actually referring to UVEye which was referenced in the article. I looked into UVEye and nowhere did it say it used LLMs with their computer vision. That’s why I asked if anyone had any info on them using it. The comment I replied to assumed LLMs were used but supplied no evidence. None of the links you shared have anything to do with UVEye either.
> Computer vision commonly uses convolutional neural networks on the input,
Nowhere do you specify UVEye.
You could admit they’re all over, but instead you double down on how I “assumed”, lol
> Except they are using computer vision, not an LLM
That’s what I initially said, referring to the article. If you have nothing to say regarding the technology in this article that’s fine, but don’t just assume that, because there is research on incorporating LLMs into computer vision, they were used in this specific case.
What do you think is driving the image recognition that comes from the computer vision hardware?
It’s an LLM.
https://www.chooch.com/blog/how-to-integrate-large-language-models-with-computer-vision/