
Google's Secret New Project Teaches AI To Write and Fix Code – Slashdot





The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
Seems to me that’s a relatively obvious approach. And it’s NOTHING NEW. The MIT AI Lab was talking about “cliches” long before the Patterns book came out, in the late ’70s/early ’80s, and much of their work went into recognizing the situations where a given “cliche” could be applied. https://dspace.mit.edu/handle/… [mit.edu] Note in this paper’s abstract the key realization that these were not complete programs, but rather were applied within the context of a program. (Thus a wave of the middle finger at those who claimed that all you needed for good code was a Patterns catalog, or that “architecture is all about patterns.”)
(p.s. to the Anonymous Coward who said, “Get a new meme, boomer”, I’d respond, “Find something to talk about that boomers haven’t already seen.”)
This Google project will be carried to the Google Graveyard within 2-3 years, because the main project manager will be sacked under Google’s new stack-ranking job evaluation.
True. Especially the manager that named the project “Pitchfork”. You know what pitchforks are good for? Shoveling shit. That was the first thing that came to mind when I read the name. Might as well call the project “Roto-rooter”.
Very interesting – thanks! That may be further along than I thought. Multi-round especially, seems like a good idea.
I guess the problem will always remain: will humans tell the system what they really want to build? Just like today, you can get specs that are not fully thought out… but an AI doing the development could allow for faster iteration of ideas, perhaps.
… to find security problems in existing code. They will then add those to their arsenal for cracking systems abroad and at home.
Let’s hope that the gang at Google learned something from the GitHub Copilot fiasco and trained their models using only code with MIT/BSD/Apache-type licenses, so that the AI isn’t marred by a sword of Damocles of copyright infringement.
Is AI an application? A service? An environment? An architecture? Can it read and track the source code on the system it is running on? Identify bugs or bad code? I’m thinking that if ANY of these is a solid yes, lawsuits like the Copilot one are going to be shot down.
There’s a lawsuit, but I don’t expect it to get anywhere.

Perhaps there should be a license that allows unrestricted code use, but only by humans.

How would the code be compiled without a computer seeing it?
Didn’t we just read yesterday that “Kite” closed shop after ten years because this stuff doesn’t work and isn’t close to working?
Once software is actually running, it is very hard (impossible?) to have the exact same purpose and output without referencing the original software in its current state. Legally, that is.
Not another one. Where do I buy stock puts or shorts against them? This time *I* want to profit from morons & suckers instead of just the fat cats.
Applications seem to write themselves there…..
It means that AI can’t write or fix all possible programs. It is of course possible to write an AI to fix a certain kind of bug, or to generate some limited subset of programs. Say you could have an AI that spits out the source code for calculators in any specified base, like hexadecimal, octal, or base 42 if you like. And if you put more work into your AI, you can have it generate more complex things. The incompleteness theorem states that no matter how complex you make your AI, there will always be some programs it can’t produce.
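
A minimal sketch of that “limited subset” idea, in Python (make_converter and to_base are made-up names for illustration, not anything from Google’s project):

import string

# Illustration only: a tiny "generator" that, given a base, returns a
# working converter -- the kind of narrow program an AI could plausibly
# emit on demand.
DIGITS = string.digits + string.ascii_lowercase + string.ascii_uppercase  # 62 symbols

def make_converter(base: int):
    """Return a function rendering non-negative integers in the given base."""
    if not 2 <= base <= len(DIGITS):
        raise ValueError(f"base must be between 2 and {len(DIGITS)}")

    def to_base(n: int) -> str:
        if n == 0:
            return "0"
        digits = []
        while n:
            n, r = divmod(n, base)
            digits.append(DIGITS[r])
        return "".join(reversed(digits))

    return to_base

print(make_converter(16)(255))  # ff
print(make_converter(42)(255))  # 63, since 6*42 + 3 = 255

Each converter is trivially correct for its base, yet the family of all such programs is still only a narrow slice of everything a programmer might ask for, which is the comment’s point.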
Brought to you by the company that doesn’t bother to even document half their work.
because all code has bugs.
It’s only code fragments from (possibly) working programs that do ‘something’ that might or might not be applicable. It’s not like they’re trying to invent a perpetual motion machine or some other impossible task. Not like the halting problem at all. Code fragments are not equivalent to full programs. Completely different.
The inability to describe how an AI process determines its result is a feature.
Code fragments cannot be evaluated without knowing the purpose of the code surrounding them, and even looking at a trace or memory dump only implies that the libraries are doing what they are tagged as doing. At some point a person has to go line by line to know what is really supposed to be happening. So you can call not knowing what a given segment of code is actually doing, without testing, patching, retesting, and so on, a convenient universal “feature”.
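
A tiny Python illustration of that point (fragment, process, and run are made-up names for the sketch): the exact same line of code does different things depending on what the surrounding program has bound.

# The same fragment, executed under two different surrounding contexts.
fragment = "total = process(items)"

def run(process):
    # Bind the fragment's free names, then execute it.
    env = {"process": process, "items": [1, 2, 3]}
    exec(fragment, env)
    return env["total"]

print(run(sum))  # 6 -- here "process" sums the items
print(run(len))  # 3 -- here "process" counts them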
… almost write themselves! 🙂
(Yes, I’m a dad and love Dad jokes.)