OpenAI News & Discussions


Post by Ozzie guy »

Regarding the last Reddit comment wjfox shared: I wonder whether, instead of effective altruism, AI itself will be blamed when OpenAI's collapse damages business.

It sounds like it could dent confidence in AI and lead to something like the dot-com bubble.

Post by wjfox »

OpenAI: Microsoft wants changes after Sam Altman debacle


Microsoft boss Satya Nadella has called for changes to OpenAI's board after its shock firing of Sam Altman.

He told CNBC "something has to change" at the firm, but did not specify what that was, or rule out the tech giant getting a seat on the board.

Microsoft is OpenAI's biggest investor by far, and has now also recruited Mr Altman to lead a new artificial intelligence (AI) team.

But its strong links to OpenAI do not currently extend to its boardroom.

"At this point, I think it's very, very clear that something has to change around the governance," said Mr Nadella. "We'll have a good dialogue with their board on that."

The Microsoft chief executive's calm demeanour in a round of media interviews is in contrast to the tumult at OpenAI itself, where staff are in open revolt at Mr Altman's departure.

They have demanded that he return and the board be fired, but exactly what is happening with the company's former chief executive is still unclear.

https://www.bbc.co.uk/news/technology-67484455

Post by raklian »

This can't be good for some of the OpenAI board members. I'm not sure they'll still be around after this saga is over.

Post by Cyber_Rebel »

Ozzie guy wrote: Tue Nov 21, 2023 10:09 am Regarding the last Reddit comment wjfox shared: I wonder whether, instead of effective altruism, AI itself will be blamed when OpenAI's collapse damages business.

It sounds like it could dent confidence in AI and lead to something like the dot-com bubble.
It's kind of hard to spin a reason as to why, though. AI itself would have had to play some part in OpenAI's debacle for that, and we all know by now that this was "moralistic" human error. It would be one thing if this were all about the curated data used to train the models (lawsuits), or some use case that ended up being dangerous, but it's a whole other thing when a few people damage the lives of many due to their own worldview.

You have to consider the reaction to all of this, too. Meta literally got rid of its safety team right after this happened, and that was no coincidence. The capitalists looking at this just see a bunch of incompetent people making a technology they've invested in more volatile than it needs to be.

Perhaps one of the best outcomes of all this will be the backlash against unreasonable "safety" alignment. I mean, this is who we're dealing with:



--------------------------------------------------------------------

Does this sound like something a reasonable person would say? Effective Altruism, which seems to be nothing more than postulation on worst-case scenarios, is good for neither business nor scientific advancement. They also unironically remind me of that radical Romulan group in Star Trek that tried to screw over AI with oddly similar reasoning.

Post by firestar464 »

https://en.wikipedia.org/wiki/Effective_altruism

Controversial, but I'm kind of tired of people mindlessly joining this "EA bad" circlejerk without trying to figure out what it actually is, and whether the people they're pissed at really line up with what the philosophy is.

Post by erowind »

Cyber_Rebel wrote: Mon Nov 20, 2023 4:05 am May the AGI be with you... The more I hear about the details surrounding this, the stranger it gets. 2020s for you.

This is the most Silicon Valley thing I've ever read, I think. Who knew HBO's series was a documentary, not a satire?

Post by Ozzie guy »

Former employees have come out against Sam Altman, claiming this is how he behaved. I will share the full text here, since some people without a Twitter account may not be able to read the whole thing.




"To the Board of Directors of OpenAI:

We are former OpenAI employees who left the company during a period of significant turmoil and upheaval. As you have now witnessed what happens when you dare stand up to Sam Altman, perhaps you can understand why so many of us have remained silent for fear of repercussions. We can no longer stand by silent.

We believe that a significant number of OpenAI employees were pushed out of the company to facilitate its transition to a for-profit model. This is evidenced by the fact that OpenAI's employee attrition rate between January 2018 and July 2020 was in the order of 50%.

Throughout our time at OpenAI, we witnessed a disturbing pattern of deceit and manipulation by Sam Altman and Greg Brockman, driven by their insatiable pursuit of achieving artificial general intelligence (AGI). Their methods, however, have raised serious doubts about their true intentions and the extent to which they genuinely prioritize the benefit of all humanity.

Many of us, initially hopeful about OpenAI's mission, chose to give Sam and Greg the benefit of the doubt. However, as their actions became increasingly concerning, those who dared to voice their concerns were silenced or pushed out. This systematic silencing of dissent created an environment of fear and intimidation, effectively stifling any meaningful discussion about the ethical implications of OpenAI's work.

We provide concrete examples of Sam and Greg's dishonesty & manipulation including:

- Sam's demand for researchers to delay reporting progress on specific "secret" research initiatives, which were later dismantled for failing to deliver sufficient results quickly enough. Those who questioned this practice were dismissed as "bad culture fits" and even terminated, some just before Thanksgiving 2019.

- Greg's use of discriminatory language against a gender-transitioning team member. Despite many promises to address this issue, no meaningful action was taken, except for Greg simply avoiding all communication with the affected individual, effectively creating a hostile work environment. This team member was eventually terminated for alleged under-performance.

- Sam directing IT and Operations staff to conduct investigations into employees, including Ilya, without the knowledge or consent of management.

- Sam's discreet, yet routine exploitation of OpenAI's non-profit resources to advance his personal goals, particularly motivated by his grudge against Elon following their falling out.

- The Operations team's tacit acceptance of the special rules that applied to Greg, navigating intricate requirements to avoid being blacklisted.

- Brad Lightcap's unfulfilled promise to make public the documents detailing OpenAI's capped-profit structure and the profit cap for each investor.

- Sam's incongruent promises to research projects for compute quotas, causing internal distrust and infighting.

Despite the mounting evidence of Sam and Greg's transgressions, those who remain at OpenAI continue to blindly follow their leadership, even at significant personal cost. This unwavering loyalty stems from a combination of fear of retribution and the allure of potential financial gains through OpenAI's profit participation units.

The governance structure of OpenAI, specifically designed by Sam and Greg, deliberately isolates employees from overseeing the for-profit operations, precisely due to their inherent conflicts of interest. This opaque structure enables Sam and Greg to operate with impunity, shielded from accountability."

Post by firestar464 »

Isn't Sam a (mostly) reasonable chap? I hate his web3 stuff but he's otherwise fine.

Really, these claims are exceptional in many respects. At this point I've just settled for "I don't know shit"

Post by Solaris »

I don't really believe the rumours surrounding the firing of Altman, and they differ greatly in nature depending on who tells the story. It's highly unlikely that all 10+ board members are pro-AI-safety people. They are also not idiots. Like Oppenheimer, they know that if they don't create anything in the next 10 years, someone else will, so the reasoning about halting AI progress is dumb speculation in my eyes. We do know what is likely. From economics, we know that board members are risk-averse utility maximisers. Their utility is usually a function of investors' profits, meaning their goal is to maximise profits in order to maximise their own utility. The firing of Altman is very likely a dispute over the future of OpenAI, one that would result in different profit streams. Sam has very likely envisioned a future for OpenAI where profits would be lower, probably due to an AI-alignment-focused mindset for future models. I would be really surprised if the reason were anything other than that; anything else would go against human nature and the way capitalism works.

Post by firestar464 »

firestar464 wrote: Wed Nov 22, 2023 12:27 am Isn't Sam a (mostly) reasonable chap? I hate his web3 stuff but he's otherwise fine.

Really, these claims are exceptional in many respects. At this point I've just settled for "I don't know shit"
Also, let's not forget that Elon is amplifying this purported letter while being one of the most virulent transphobes in existence. He obviously has an agenda, and not a moral one.

Post by wjfox »

[Image. Credit: SuhailKakar]

Post by firestar464 »

Apparently Adam is forgiven. Interesting.

Here's a dumb conspiracy theory for shits and giggles:

Adam votes to fire his friend to boost his friend's popularity and boost MS stock. Lots of holes in this, but it's just a joke lol