AI to asses authenticity of a watch

Posts
10,586
Likes
51,511
I’m gonna guess others have tried this, but I know someone with a rep who used OpenAI to check for authenticity, and it was able to point out numerous aspects of why it’s fake. I had to download Grok for a study I was accessing at work regarding a surgery one of my clients was undergoing. I can’t remember what the heck it was, but it did provide the needed info.

So I attempted to have it authenticate my Rolex. It spit out an impressive list of things to look at and compared each one to my image. Actually quite in depth, but I think it made an error reading the bezel as fluted. I can’t explain that.

I couldn’t copy and paste the entire spiel; it was too long. Perhaps it could be beneficial for rudimentary checks.

Interesting that it picked out a rep and leaned towards mine being authentic. I admit two examples don’t tell much, but it is interesting to ponder.
 
Posts
4,437
Likes
44,297
I hope you won't rely on an AI for authentication. It really just sucks, as is proved by its response: picking up Rolex marketing and using it as fact or feature when it's just spin for marketing purposes.
 
Posts
10,586
Likes
51,511
I hope you won't rely on an AI for authentication. It really just sucks, as is proved by its response: picking up Rolex marketing and using it as fact or feature when it's just spin for marketing purposes.
So you’re saying it reads like an ad? Honestly, the entire comparison would have run to four or more screenshots of text comparing fake and real. I just posted a screenshot of the last summary. It really went in depth. I’m not proposing anyone use it as the final say in anything; I just found it interesting that it was able to distinguish a rep.
 
Posts
23,026
Likes
51,476
LLMs are terrible at image analysis. The text you printed contains an obvious hallucination, suggesting that the whole thing is probably either garbage or sometimes accidentally correct.

The model probably can give you good descriptions of an authentic Rolex, but can't tell if the photos exhibit those features. I uploaded a photo of an Omega to ChatGPT once, and it said that it was an authentic Rolex, even though the Omega symbol and logo were readily visible.

Ultimately there will be specialized AI solutions for authentication using images, but they will not be LLMs. Entrupy is an example for handbags and sneakers; I have no idea how good it is.
 
Posts
10,586
Likes
51,511
LLMs are terrible at image analysis. The text you printed contains an obvious hallucination, suggesting that the whole thing is probably either garbage or sometimes accidentally correct.

The model probably can give you good descriptions of an authentic Rolex, but can't tell if the photos exhibit those features. I uploaded a photo of an Omega to ChatGPT once, and it said that it was an authentic Rolex, even though the Omega symbol and logo were readily visible.

Ultimately there will be specialized AI solutions for authentication using images, but they will not be LLMs. Entrupy is an example for handbags and sneakers; I have no idea how good it is.
Could just be occasionally correct; I only had two examples. But it is something that may be helpful as AI continues to improve. With all the money going into it, one would think it will improve in the future, perhaps in ways we can’t yet comprehend, or maybe it’ll fizzle out. I’m leaning towards improving in ways we can’t yet comprehend, but that’s not worth much, just an opinion.
 
Posts
964
Likes
1,723

AI to asses authenticity of a watch​

Hey ChatGPT! Do our asses look big in these?
🤣😉
 
Posts
10,586
Likes
51,511

AI to asses authenticity of a watch​

Hey ChatGPT! Do our asses look big in these?
🤣😉
That was on purpose, thanks for noticing. And I did have ChatGPT misread a watch when I was messing around with it once, but that was a year or so ago. In general I have little use for it.
 
Posts
2,334
Likes
3,024
I tested a few LLMs and they mostly struggled to even tell the difference between a MoonSwatch and a Speedmaster, while providing convincing-sounding BS replies about the authenticity and condition. Impressive-sounding output, while having no real clue.
 
Posts
33,019
Likes
37,797
I mentioned this in another thread, but we've had people rely on it when ordering a bracelet, only to be told to buy the wrong size (21mm vs 20mm) because it doesn't know the difference between models, and a guy thought the vintage early Seamaster he inherited was fully water resistant because AI told him all Seamasters were.

I'm sure it leads many more people astray as the nuances of vintage and modern watches are quite subtle and it's so easy to misread something minor that makes a big difference.

The way I tend to use it is to ask something like "What are the lug bolt torque specs on a BMW E46? Provide sources to confirm the information," and then I verify the linked sources to make sure it's correct. That's really the only safe way to do any actual decision making with it: use it as a search engine of sorts and check its sources every time.
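 
To make that concrete, here's a rough sketch of that "ask for sources, then check them" workflow in Python. It assumes the OpenAI client; the model name, the exact prompt wording, and the naive URL check are my own illustrative choices, and actually reading the sources yourself is still the important part.

```python
# Sketch of the "ask for sources, then verify them" pattern described above.
# Assumptions: OPENAI_API_KEY is set; "gpt-4o" stands in for whatever model you use.
import re
import requests
from openai import OpenAI

client = OpenAI()

question = (
    "What are the lug bolt torque specs on a BMW E46? "
    "Provide links to sources that confirm the information."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": question}],
)
answer = resp.choices[0].message.content
print(answer)

# Pull out any URLs the model cited and at least confirm they resolve;
# the real verification is reading them yourself.
for url in re.findall(r"https?://\S+", answer):
    try:
        reachable = requests.head(url, allow_redirects=True, timeout=10).ok
    except requests.RequestException:
        reachable = False
    print(("ok   " if reachable else "dead ") + url)
```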
 
Posts
23,026
Likes
51,476
I tested a few LLMs and they mostly struggled to even tell the difference between a MoonSwatch and a Speedmaster, while providing convincing-sounding BS replies about the authenticity and condition. Impressive-sounding output, while having no real clue.
For a giggle, next time you get a BS reply, respond by telling the LLM that it's incorrect and explaining why. It will apologize and give you another BS reply. You can do this indefinitely by uploading a photo of a movement, because the LLMs have absolutely no clue about them.
 
Posts
4,437
Likes
44,297
My experience with AI to this point has been bad, particularly when researching subjects that are well known to me, with AI putting up ridiculously incorrect responses in amongst the correct ones. That is fine when you know the subject and can sort the wheat from the chaff, so to speak, but many (most) people using it have no clue; that is, after all, why they are using it: to gain knowledge or insight into little-known or unknown subjects.
The long-term effect of this is that individuals or organisations that used AI to source their data then further promulgate the false information online, making the data sets that AI selects from for its responses even more incorrect due to the online reinforcement of incorrect assumptions.
 
Posts
2,334
Likes
3,024
My experience with AI to this point has been bad, particularly when researching subjects that are well known to me, with AI putting up ridiculously incorrect responses in amongst the correct ones. That is fine when you know the subject and can sort the wheat from the chaff, so to speak, but many (most) people using it have no clue; that is, after all, why they are using it: to gain knowledge or insight into little-known or unknown subjects.
The long-term effect of this is that individuals or organisations that used AI to source their data then further promulgate the false information online, making the data sets that AI selects from for its responses even more incorrect due to the online reinforcement of incorrect assumptions.
My experience has been that LLMs are best at anything transformation-related. Give one existing known-working code and a well-defined change request, or have it port a Bash script to Go, translate Chinese to English, etc. It can do really well at these sorts of tasks. I've learned a lot of new stuff from these models, but always in areas where I had enough base knowledge to pick out obvious slop.
 
Posts
23,026
Likes
51,476
Not as profound, but LLMs are great at generating polished text for routine tasks, where they have presumably been trained on many good examples. For example, I asked ChatGPT to draft a vision statement for a certain type of enterprise, and it gave me an excellent first draft, presumably drawn from other statements found on the internet. As expected, it's also good at drafting press releases, job descriptions, etc.

They don't write good jokes.
 
Posts
2,334
Likes
3,024
Not as profound, but LLMs are great at generating polished text for routine tasks, where they have presumably been trained on many good examples. For example, I asked ChatGPT to draft a vision statement for a certain type of enterprise, and it gave me an excellent first draft, presumably drawn from other statements found on the internet. As expected, it's also good at drafting press releases, job descriptions, etc.

They don't write good jokes.
Depends on the model and instructions. It'll put in an effort, but it yields some pretty weird results sometimes...
 
Posts
18,041
Likes
27,348
LLMs are horrible at images. You need a trained neural network to assess watches.

LLMs are a trap for lots of different types of information.

Then there are a few examples I’ve run across where all of the LLMs will confidently give me three different wrong answers to a question worded only slightly differently. They are all steadily getting worse as they ingest other AI-created answers and train on those.
 
Posts
2,643
Likes
4,218
perhaps in ways we can’t yet comprehend, or maybe it’ll fizzle out.
I am leaning towards fizzle out. Too much hype and no substance. I still think 'AI' is like a wax model: detailed outside, nothing inside.

I have been working with image processing for nearly 50 years, and a lot of it is the same statistics that underlie AI. Graphics use matrix multiplication for transforms; everything is vectors. I think LLMs use the same kind of vectors that are used to compress images and audio, but to process language grammar. Perhaps I am mixing up neural nets (which I sometimes work with) and LLMs; I thought LLMs were large NNs.
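
For what it's worth, the linear-algebra connection is easy to show. Here's a toy sketch (my own illustration, not anyone's actual pipeline) of how both a graphics transform and a neural-network layer come down to matrix-vector products:

```python
# Toy illustration only: a 2-D graphics transform and a neural-net "layer"
# are both just matrix-vector products.
import numpy as np

# Graphics: rotate the point (1, 0) by 90 degrees with a 2x2 rotation matrix.
theta = np.pi / 2
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(rotation @ np.array([1.0, 0.0]))   # ~[0, 1]

# Language models: a token becomes an embedding vector, and each layer applies
# learned matrices to it. Random numbers stand in for the learned weights here.
embedding = np.array([0.2, -0.7, 0.5])                  # pretend token embedding
weights = np.random.default_rng(0).normal(size=(3, 3))  # pretend layer weights
print(np.tanh(weights @ embedding))                     # one tiny "layer"
```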

I think a lot of people are asking ChatGPT to predict when the AI bubble will burst. That could create some rather interesting feedback.

Shakespeare mentioned black swans in Romeo and Juliet. I think they had to remove one recently from the pond at Stratford-upon-Avon. Not sure stockbrokers read Shakespeare.

Note who else clicked on the continue reading button in the OP graphic?
 
Posts
77
Likes
142
My experience with AI to this point has been bad, particularly when researching subjects that are well known to me, with AI putting up ridiculously incorrect responses in amongst the correct ones. That is fine when you know the subject and can sort the wheat from the chaff, so to speak, but many (most) people using it have no clue; that is, after all, why they are using it: to gain knowledge or insight into little-known or unknown subjects.
The long-term effect of this is that individuals or organisations that used AI to source their data then further promulgate the false information online, making the data sets that AI selects from for its responses even more incorrect due to the online reinforcement of incorrect assumptions.
This.

This is already the norm for many people and organizations. They take AI information as the ultimate truth, and false and absolutely ridiculous claims are floating around as facts.
Even more worrying is that AI is doctored to be a political tool.
 
Posts
10,586
Likes
51,511
It seems Meta's Llama 3.2 was made to enhance LLMs' ability to work with images. I'm not downloading or interacting with any more of these things to test them out, as I have little use for them. They are basically a toy for me, with the exception of getting information about that one study I couldn't find elsewhere.

As far as AI fizzling out, anything is possible, but Nvidia became a $1 trillion company in 2023. I was looking at its market cap Wednesday and Thursday, prior to the most recent trade heat-up with China; it was at $4.7 trillion with a 53 P/E, and they had just put another $100 billion into OpenAI. Either way it will be interesting to see how it plays out.
 
Posts
23,026
Likes
51,476
It seems Meta's Llama 3.2 was made to enhance LLMs' ability to work with images. I'm not downloading or interacting with any more of these things to test them out, as I have little use for them. They are basically a toy for me, with the exception of getting information about that one study I couldn't find elsewhere.

As far as AI fizzling out, anything is possible, but Nvidia became a $1 trillion company in 2023. I was looking at its market cap Wednesday and Thursday, prior to the most recent trade heat-up with China; it was at $4.7 trillion with a 53 P/E, and they had just put another $100 billion into OpenAI. Either way it will be interesting to see how it plays out.
Some LLMs are pretty good at editing images, but they are fundamentally not the type of AI you want to use to analyze images for purposes like authentication. As noted several times above, the standard convolutional neural network approach is much better. The only reason to attempt to use LLMs is that they are sitting there, free to use. But LLMs will be right and wrong arbitrarily, which is not helpful for authentication.
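 
For anyone curious what that looks like in practice, here's a minimal sketch of the purpose-built approach, assuming PyTorch/torchvision and a hypothetical folder of labelled photos. It's an illustration of the technique only, not how Entrupy or any other commercial tool actually works, and a real system would need far more (and far better) data than this implies.

```python
# Minimal sketch: fine-tune a small convolutional network as a binary
# genuine/replica image classifier. The dataset layout is hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folders: watch_photos/genuine/*.jpg, watch_photos/replica/*.jpg
train_data = datasets.ImageFolder("watch_photos", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: genuine / replica

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):              # tiny training loop, no validation split
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point is that a model like this is trained directly on labelled genuine/replica examples and outputs a score for that single question, rather than generating plausible-sounding text about whatever happens to be in the photo.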
 
Posts
10,586
Likes
51,511
Apparently Grok thinks it has a decent success rate at spotting fakes, although again it always recommends further assessment to verify.

I am neither pro nor con, just observing the scene. I would be interested in a version designed specifically to assist with watches. I don’t know if I’d ever have 100% faith in one, but it could be cool when you’re at a shop and come across something unique. I just find it of interest, especially after watching a show on how AI is helping in cancer research; it was rather enlightening to me.

I’m not defending the AI’s claim about its accuracy; you can take that up with the AI.