It’s amazing how Microsoft can take good models and absolutely ruin them in production… ChatGPT isn’t perfect, but it’s the difference between talking to a wall and talking to an average-IQ person whose reasoning in many domains equals or exceeds human performance, provided the user knows how to prompt it well. That shifts a little every time they push a major model update, though.
I’ve had more intelligent conversations with my own computer running a 3-billion-parameter open-source model. They must be wasting an incredible amount of money. Especially with GPT-4, considering it produces pretty shit results through Bing Chat…
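(For anyone curious, running a small open model locally really is only a few lines these days. This is just a minimal sketch using Hugging Face transformers; the comment above doesn’t name the model, so the 3B checkpoint here is an assumed example.)

```python
# Minimal local chat with a small open-source model via Hugging Face transformers.
# NOTE: the checkpoint is an assumed example; the comment above doesn't say which
# 3B model was used. Any ~3B causal LM with a chat template works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "stabilityai/stablelm-zephyr-3b"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

messages = [{"role": "user", "content": "Explain why the sky is blue."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a reply; the weights are frozen, so nothing said here is "learned".
outputs = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```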
I don’t think that’s a problem with the model itself; it’s that the model was heavily censored and lobotomized to achieve maximum political correctness so they could avoid another Tay incident.
It makes sense that they’d do that, since the media and randoms on the internet treat everything ChatGPT and Bing Chat say as being as valid as statements from official OpenAI and MS spokespeople.
Thing is, there was never even a chance of a full Tay incident. The problem with Tay was that it kept learning from user interactions, so people could teach it to be more and more messed up.
Meanwhile, ChatGPT doesn’t learn from conversations; its knowledge is fixed at training time (hence why it only knows things up to September 2021). So the heavy censorship is more likely there to avoid much more minor incidents, which imo is dumb.
Microsoft and ruining things just go hand in hand.