Old 01-04-23, 22:43   #1 - Ladybbird

Brit Invented AI: Slaughter Machines - Dangerous Rise of Military AI (LONG Read)

Should We Shut Down AI?

Letter Signed by Elon Musk Demanding AI Research Pause Sparks Controversy

The letter has since been revealed to include false signatures, and researchers have condemned its use of their work.


BBC 2 APR 2023







What links a fake image of the Pope, a successful parking appeal in Yorkshire and new government proposals? The answer is AI - artificial intelligence - two words we are going to hear a lot about in the coming months.

The picture of the Pope in a Michelin Man-style white coat was everywhere online, but it was made using AI by a computer user from Chicago.



In Yorkshire, 22-year-old Millie Houlton asked AI chatbot ChatGPT to "please help me write a letter to the council, they gave me a parking ticket" and sent it off. The computer's version of her appeal successfully got her out of a £60 fine.



Also this week, without much fanfare, the UK government published draft proposals on how to regulate this emerging technology. Meanwhile, a letter signed by more than 1,000 tech experts, including Tesla boss Elon Musk, called on the world to press pause on the development of more advanced AI because it poses "profound risks to humanity".


You are not alone if you don't understand all the terms being bandied about:

A chatbot is, in its basic form, a computer program meant to simulate a conversation you might have with a human on the internet - like when you type a question to ask for help with a booking. The launch, and explosion in popularity, of a much more advanced one, ChatGPT, has got tongues wagging in recent months
Artificial intelligence, in its most simple form, is technology that allows a computer to think or act in a more human way
That includes machine learning, when, through experience, computers can learn what to do without being given explicit instructions (a toy example follows this list)
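To make the machine learning idea concrete, here is a minimal sketch in Python (an illustration added for this thread, not something from the BBC article). The program is never told the rule y = 2x + 1; it infers it from example data by trial and error, which is the same basic idea, at a vastly smaller scale, behind systems such as ChatGPT.

Code:
# Toy machine learning: learn y = w*x + b from examples by gradient descent.
# The target rule (y = 2x + 1) is never written into the model; it is learned.
examples = [(x, 2 * x + 1) for x in range(10)]  # the "experience"

w, b = 0.0, 0.0   # model parameters start out knowing nothing
lr = 0.01         # learning rate: size of each correction step

for _ in range(2000):          # many small corrections
    for x, y in examples:
        err = (w * x + b) - y  # how wrong the current guess is
        w -= lr * err * x      # nudge w to reduce the squared error
        b -= lr * err          # nudge b the same way

print(f"learned: y = {w:.2f}*x + {b:.2f}")  # roughly y = 2.00*x + 1.00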



It's the speed at which the technology is progressing that led those tech entrepreneurs to intervene, with one AI leader even writing in a US magazine this week: "Shut it down."

Twitter, Tesla and SpaceX mogul Elon Musk is one of those calling for a pause to the development of advanced AI

Estonian billionaire Jaan Tallinn is one of them. He was one of the brains behind internet communication app Skype but is now one of the leading voices trying to put the brakes on.


I asked him, in an interview for this Sunday's show, to explain the threat as simply as he could.

"Imagine if you substitute human civilisation with AI civilisation," he told me. "Civilisation that could potentially run millions of times faster than humans... so like, imagine global warming was sped up a million times.

"One big vector of existential risk is that we are going to lose control over our environment.



"Once we have AIs that we a) cannot stop and b) are smart enough to do things like geoengineering, build their own structures, build their own AIs, then, what's going to happen to their environment, the environment that we critically need for our survival? It's up in the air."

And if governments don't act? Mr Tallinn thinks it's possible to "apply the existing technology, regulation, knowledge and regulatory frameworks" to the current generation of AI, but says the "big worry" is letting the technology race ahead without society adapting: "Then we are in a lot of trouble."

It's worth noting they are not calling for a halt to all AI work, only a pause to the high-end work of training computers to be ever smarter and more like us.



Old 16-04-23, 00:11   #2 - Ladybbird

re: Brit Invented AI: Slaughter Machines - Dangerous Rise of Military AI (LONG Read)

The British Pioneer Who Invented AI in 1955 - An RAF Analyst and Accountant

History Of Performance Analysis (AI): The Controversial Pioneer Charles Reep

Charles Reep: Football Analytics' Founding Father. How a Former RAF Wing Commander Set in Motion Football's Data Revolution.

Unlikely as it may seem, a military airfield in Bedfordshire used in World War One can be identified as the location where British football AI analysis found its spark, almost a century ago.

RAF Henlow was the country's first parachute test centre and future jet-engine inventor Sir Frank Whittle studied there. It was only a decade old when Thorold Charles Reep arrived as a new recruit in 1928.


BBC News 15 APR 2023






Thorold Charles Reep (22 September 1904 – 3 February 2002) was an RAF analyst whose pioneering match analysis is credited with shaping the long-ball game that has characterised English football. Reep trained as an accountant after leaving Plymouth High School in 1923.


Thorold Charles Reep was born in 1904 in the small town of Torpoint, Cornwall, in the south-west of England. At the age of 24, he joined the Royal Air Force to serve as an accountant, where he learned the mathematical skills and attention to detail that he went on to employ throughout his career. During World War II he was deployed in Germany, and he would eventually reach the rank of Wing Commander.



From a young age, Reep was a faithful supporter of his local club Plymouth Argyle and would frequently attend matches at Home Park. However, his relocation to London after joining the Royal Air Force gave him the opportunity to attend Tottenham Hotspur and Arsenal matches.


In 1933, Arsenal's captain Charles Jones came to Reep's camp to talk about the analysis of wing play being used by the London club, which emphasised getting wide players to move the ball quickly up the pitch. The talk deeply inspired Reep, who soon became a keen enthusiast of Arsenal's manager Herbert Chapman and his attacking style of football. This was the start of Reep's passion for attacking football and for its adoption across the country.

In March 1950, during a match between Swindon Town and Bristol Rovers at the County Ground, Reep became increasingly frustrated during the first half of the match by Swindon’s slow playing style and continuously inefficient scoring attempts.

He took his notepad and pen out at half-time and started recording rudimentary actions, pitch positions and passing sequences with outcomes, using a system that mixed symbols and notes to obtain a complete record of play. He wanted to better understand Swindon's playing patterns and scoring performance, and to suggest any improvements needed to guarantee promotion. He ended up recording a total of 147 attacking plays by Swindon in the second half of their 1-0 win against Bristol Rovers.

Using a simple extrapolation, Reep estimated that a full match would consist of an average of 280 attacking moves, with an average of 2 goals scored per match. This indicated an average scoring conversion rate of only 0.71%, suggesting only a small improvement was needed for a side to raise its average from two goals per game to three.
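Reep's arithmetic is easy to verify; the sketch below simply replays the figures quoted above (280 moves, 2 goals):

Code:
# Replaying Reep's extrapolation with the figures quoted in the text.
attacks_per_match = 280   # extrapolated from 147 moves in one half
goals_per_match = 2       # the average he assumed

print(f"conversion rate: {goals_per_match / attacks_per_match:.2%}")  # 0.71%
print(f"rate needed for 3 goals: {3 / attacks_per_match:.2%}")        # 1.07%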

In the years that followed, Charles Reep quickly established himself as the first performance analyst in professional football, as he witnessed how the information he was collecting was being used to plan strategy and analyse team performance. He never stopped developing his theory of the game, watching and notating an average of 40 matches a season, taking him around 80 hours per match.

He was often spotted recording match events from the stand at Plymouth's Home Park wearing a miner's helmet to illuminate his notebook, meticulously scribbling down play-by-play spatial data by hand.

In 1958, he attended the World Cup in Solna, near Stockholm, and produced a detailed record of the total number of goals scored, shots and possessions during the final. He wanted to provide an objective count of what took place in that match, away from opinions, biased recollections or a few single memorable events on the pitch. He produced a total of fifty pages of match drawings and feature dissection that took him over three months to complete.

A 1954 match between the domestic champions of England (Wolverhampton Wanderers) and the Hungarian league winners (Budapest Honved). Stan Cullis declared his team "champions of the world" after their 3-2 victory. This provoked a lot of criticism and inspired the creation of the official European Cup the following season.

The real-time notational system Charles Reep developed took him to Brentford in 1951. Manager Jackie Gibbons offered him a part-time adviser position to help the struggling side avoid relegation from the Second Division. With Reep's help, Brentford managed to double their goals-per-match ratio and secure their place in the division by winning 13 of their last 14 matches.

The following season, his Royal Air Force duties moved Reep to Shropshire, near Birmingham. There he met Stan Cullis, at the time manager of the successful and exciting Wolverhampton Wanderers side. Cullis offered Reep a similar advisory role at his club to the one he had successfully undertaken at Brentford.


Reep brought with him not only the knowledge acquired from his analysis at Swindon and Brentford but also an innovative, real-time process that produced hand notations of every move of a football match, together with subsequent data transcription and analysis. Reep was a strong believer in direct attacking football, and his work only reinforced Cullis's pre-established opinions of how the game should be played.

In his three and a half years at Wolves, Reep helped the club implement a direct, incisive style of play that dispensed with aesthetics (i.e. skill moves) and instead took advantage of straightforward, fast wingers. Square passing by Wolves players came to be frowned upon by Cullis and the coaching team.

During this time, the concept of the Position Of Maximum Opportunity (POMO) began to emerge, describing the area of the opposition's box into which crosses should be directed in order to increase the chances of scoring. Under the Reep-Cullis partnership, Wolves achieved European success in what was then the European Champions Cup competition.

In 1955, Charles Reep retired from the Royal Air Force and was offered £750 on a one-year renewable contract by Sheffield Wednesday to work as an analyst alongside manager Eric Taylor. He ended up spending three years at Sheffield Wednesday, achieving promotion from Division Two in his first season at the club.


In his final season at the club, his departure was triggered by the team's disappointing results, and Reep pointed the finger at the club's key player for refusing to buy into his long-ball playing system. During the remainder of his career, his direct involvement with clubs became far more sporadic. Nevertheless, he helped a total of twenty-three managers, at teams such as Wimbledon and Watford and even with the Norwegian national team, understand and adopt his football philosophy.

Over the years away from club roles, Charles Reep continued to investigate the relationships between passing movements, goals, games and championships, as well as the influence that random chance has on those variables. He was keen to develop his theory further by summarising all the notes and records he had been collecting since 1950. During this analysis, Reep developed an interest in probability and the negative binomial distribution, which he applied to his dataset. His analytical methods eventually became public after he shared his notes with the News Chronicle and the magazine Match Analysis.
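To illustrate the negative binomial idea, here is a sketch that matches pass-per-move counts to the distribution by the method of moments. The counts are invented placeholders (not Reep's data) and SciPy is assumed to be available; this shows the technique, not his actual calculations.

Code:
# Matching pass-per-move counts to a negative binomial (method of moments).
# pass_counts is invented illustrative data, NOT Reep's dataset.
from statistics import mean, pvariance
from scipy.stats import nbinom  # assumes SciPy is installed

pass_counts = [0, 0, 1, 1, 1, 2, 2, 3, 0, 1, 4, 2, 0, 1, 6, 2, 1, 0, 3, 1]

m, v = mean(pass_counts), pvariance(pass_counts)
assert v > m, "negative binomial needs variance > mean (overdispersion)"

p = m / v            # method-of-moments estimates of the
r = m * m / (v - m)  # distribution's two parameters

for k in (3, 5):     # observed vs fitted share of short moves
    obs = sum(c <= k for c in pass_counts) / len(pass_counts)
    fit = nbinom.cdf(k, r, p)
    print(f"moves with <= {k} passes: observed {obs:.0%}, fitted {fit:.0%}")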

These publications demonstrated that Charles Reep had discovered insights into the game not previously analysed. Some of these suggested that teams scored on average one goal every nine shots, or that half of all goals came from balls recovered in the final third of the pitch. One of his most famous remarks was that teams are more efficient when they reduce the time spent passing the ball around and instead focus on lobbing the ball forward with as few passes as possible. He was a firm promoter of a quicker, more direct, long-ball playing style.

Reep followed a notational analysis method of dividing the pitch into four sections to identify a shooting area approximately 30 metres from the goal-line. This detailed in-event notation and post-event analysis enabled him to accurately measure the distance and trajectory of every pass.


Amongst his findings, he discovered that (a sketch of the bookkeeping follows this list):

It took 10 shots to score 1 goal
50% of goals were scored from moves of 0 or 1 passes
80% of goals were scored from moves of 3 or fewer passes
Regaining possession within the shooting area was a vital source of goal-scoring opportunities
50% of goals came from breakdowns in a team's own half of the pitch
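As flagged above, a sketch of the bookkeeping: once each move is notated with its pass count and outcome, the headline figures fall out as simple ratios. The records here are invented stand-ins for Reep's hand-notated data.

Code:
# Reep-style summary statistics from notated moves (invented sample records).
moves = [
    # (passes_in_move, ended_in_shot, ended_in_goal)
    (0, True, True),
    (1, True, False),
    (2, False, False),
    (3, True, False),
    (1, True, True),
    (5, True, False),
    # ...a real Reep notebook held thousands of these per season
]

shots = sum(1 for p, s, g in moves if s)
goals = sum(1 for p, s, g in moves if g)
short_goals = sum(1 for p, s, g in moves if g and p <= 3)

print(f"shots per goal: {shots / goals:.1f}")
print(f"goals from moves of <=3 passes: {short_goals / goals:.0%}")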


In 1968, Reep went on to publish his statistical analysis of patterns of play in football in the Journal of the Royal Statistical Society. In the paper, he analysed 578 matches to assess the distribution of passing movements and found that 99% of all plays consisted of fewer than six passes, while 95% consisted of fewer than four. These findings backed Reep's belief in reducing the frequency of passing and possession time by moving the ball forwards as quickly as possible. He wanted the truths he had discovered to dictate how teams played.


From his first analysis of the 1950 Swindon Town match against Bristol Rovers through to the mid-1990s, Charles Reep went on to notate and analyse a total of 2,200 matches. In 1973, Reep analysed England's 3-1 loss to West Germany in the 1972 European Championship to protest vigorously against the "pointless sideways" passing style adopted by the Germans. In that match, the Germans had outplayed the English with a smooth passing style of football that was labelled at the time as "total football".


Reep attempted to argue against the praise this new passing style had received across the continent by claiming that it lacked the attractiveness demanded by fans, as it made goal scoring a secondary objective in exchange for extreme elaboration of play. Instead, he pushed his own views on the use of long balls, suggesting that even though they found their intended target less often, they brought unquestionable gains. He stated that, based on his analysis, five missed long passes generated as many chances as five completed ones.

Most of Charles Reep's analysis supported the effectiveness of a direct style of football, with wingers positioned as high up the pitch as possible waiting for long balls. This approach had a significant influence on the English national team between the 1970s and 1980s, when the debate over the importance of possession became the central topic of conversation amongst FA directors. Reep, often described as an imperious individual intolerant of criticism, argued against the need for ball possession, contrary to the philosophy backed by the FA's then technical director Allen Wade.

It was not until 1983, when Wade was replaced as technical director by his former assistant Charles Hughes – a strong believer of long ball play – that Reep’s direct football ideology became the new FA's explicit tactical philosophy of the English game. Hughes saw in Reep’s work an opportunity to redefine the outdated ideals of the amateur founders of the FA and introduce his own mandate across the whole English game.

This mandate consisted of a style of play focused on long diagonals and player physicality. As a result, technically gifted midfielders found themselves watching the ball fly over their heads as they struggled with overly physical challenges.

Charles Reep's simplistic methods have been, and continue to be, criticised by many football fans and analytics enthusiasts. One critic noted that while his study of passing distribution showed that almost 92% of moves consisted of fewer than 3 passes, his dataset attributed only 80% of goals, not 92%, to these short possessions. This undermines Reep's beliefs by showing that moves of 3 or fewer passes were in fact, move for move, a less effective way to score goals.

Additionally, it demonstrated that Reep's claim that most goals came from moves of fewer than four passes was simply a consequence of the fact that most movements in football (92% in his dataset) are short possessions, so it is unsurprising that most goals would be scored that way.
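The critics' arithmetic is worth spelling out. Taking the shares quoted above (92% of moves, 80% of goals) and invented round totals of 1,000 moves and 25 goals, longer moves come out roughly three times more productive per move:

Code:
# The critics' point in numbers: the 92% / 80% shares are from the text;
# the totals (1,000 moves, 25 goals) are invented for illustration.
total_moves, total_goals = 1000, 25

short_moves = 0.92 * total_moves   # moves of fewer than 3 passes
long_moves = total_moves - short_moves
short_goals = 0.80 * total_goals   # goals credited to those short moves
long_goals = total_goals - short_goals

print(f"goals per short move: {short_goals / short_moves:.4f}")  # ~0.0217
print(f"goals per long move:  {long_goals / long_moves:.4f}")    # ~0.0625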
Similarly, his study did not appear to take into consideration differences in team quality. Evidence of this can be seen in the World Cup matches he analysed, which contained double the number of plays of seven or more passes compared with those he recorded from English league matches.


This suggests that Reep missed the fact that a higher-quality game in a higher-level competition such as the World Cup, with better players available, tended to produce longer passing moves than English league matches, where the average technical quality of players was lower.


Furthermore, critics have added that none of Reep's analysis takes into consideration factors beyond playing style, such as the exhaustion inflicted on the opposition by forcing them to chase the ball around through passing.

Reep's character and very strong preconceived notions may have prevented him from investigating alternative hypotheses that did not agree with his philosophy of direct football. He was often described as an absolutist who wanted to push a single, generic winning formula. As a result, much of Reep's analysis remained ignorant of the numerous essential factors that can affect a match's outcome.

Critics have often labelled Reep's influence on the philosophies applied to English football and coaching styles for over 30 years as "horrifying", owing to the fundamental misinterpretations Reep committed throughout his work. As previously stated, one of these consisted of applying the same considerations and weighting to a match by an English Third Division team as to a match in the World Cup.

He paid no attention to the quality of the teams involved, ignoring potentially valid assumptions that a technically poorer team may experience greater risks when attempting to play possession football. Instead, he followed his own preconceptions, such as assuming that teams should always be trying to score, when in reality teams may decide to defend their scoreline advantage by holding possession.

Aside from the criticism of his poor methods and misinterpreted findings, Reep has also been recognised for the new approaches he introduced to the analysis of the game. He was one of the first pioneers to show that football had constant and predictable patterns, and that statistics give us a chance to identify what we would otherwise have missed.


He initiated the thinking around reconstructing past performance through data collection, which could then inform strategies for achieving successful match outcomes. While he might not have been an outstanding data analyst, Charles Reep was a fine accountant with great attention to detail and a remarkable ability to collect data.

The approaches he introduced have significantly evolved since Reep’s first notational analysis in 1950.


Technologies and analytical frameworks developed since the 1990s have enabled video analysis and data collection systems that improve athlete performance. From the foundation of Prozone in 1995, offering high-quality video analysis, to the emergence of Opta Sports and StatsBomb as global data providers capturing millions of data points per match, the field of notational and performance analysis in football has evolved in line with the technological revolution of the last few decades.

The popularity of big data and the growing desire for data-driven objectivity have become important priorities within professional clubs aiming to gain a competitive advantage in a game of increasingly tight margins.


Reep's work set in motion what is today an ecosystem of video analysis software, data providers, analysts, academia, data-influenced management decisions and redefined coaching processes, all of which constitute a key piece of modern football.




AI technologies and analytical frameworks have continued to develop to this day...





Old 17-05-23, 07:31   #3 - Ladybbird

re: Brit Invented AI: Slaughter Machines - Dangerous Rise of Military AI (LONG Read)

Sam Altman: CEO of OpenAI Calls For US to Regulate Artificial Intelligence

The creator of advanced chatbot ChatGPT has called on US lawmakers to regulate artificial intelligence (AI).


BBC 17 MAY 2023





Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before a US Senate committee on Tuesday about the possibilities - and pitfalls - of the new technology.

In a matter of months, several AI models have entered the market.

Mr Altman said a new agency should be formed to license AI companies.


ChatGPT and other similar programmes can create incredibly human-like answers to questions - but can also be wildly inaccurate.

Mr Altman, 38, has become a spokesman of sorts for the burgeoning industry. He has not shied away from addressing the ethical questions that AI raises, and has pushed for more regulation.

He said that AI could be as big as "the printing press" but acknowledged its potential dangers.

He also acknowledged the impact AI could have on the economy, including the likelihood that the technology could replace some jobs, leading to layoffs in certain fields.

"There will be an impact on jobs. We try to be very clear about that," he said...



However, some senators argued new laws were needed to make it easier for people to sue OpenAI.

Mr Altman told legislators he was worried about the potential impact on democracy, and how AI could be used to send targeted misinformation during elections.

He gave several suggestions for how a new agency in the US could regulate the industry - including giving out and taking away permits for AI companies.

He also said firms like OpenAI should be independently audited.

Republican Senator Josh Hawley said the technology could be revolutionary, but also compared the new tech to the invention of the "atomic bomb".

Democrat Senator Richard Blumenthal observed that an AI-dominated future "is not necessarily the future that we want".


"We need to maximize the good over the bad. Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment," he warned.

What was clear from the testimony is that there is bipartisan support for a new body to regulate the industry.

However, the technology is moving so fast that legislators also wondered whether such an agency would be capable of keeping up.





Old 19-05-23, 06:53   #4 - Ladybbird

re: Brit Invented AI: Slaughter Machines - Dangerous Rise of Military AI (LONG Read)

BT to Replace People With AI as up to 55,000 Job Cuts Announced

BBC 19 MAY 2023












The telecoms giant BT Group has said it will cut up to 55,000 jobs by the end of the decade as it completes its switch to fibre-optic broadband, with up to 10,000 of them lost to artificial intelligence.


It currently has 130,000 employees, but it said jobs will be lost as customers rely more on app-based communication than on call centres.

BT said that once its new full-fibre broadband and 5G network is rolled out, it will not need as many engineers to build and maintain it.


Old 20-05-23, 02:55   #5 - Ladybbird

re: Brit Invented AI: Slaughter Machines - Dangerous Rise of Military AI (LONG Read)

EU Already Regulates AI - US Does NOT

Europe's AI Act Contains Powers to Order AI Models Destroyed or Retrained, Says Legal Expert

BBC 20 MAY 2023










The European Commission put out its proposal for an AI Act just over a year ago, presenting a framework that prohibits a tiny list of AI use cases (such as a China-style social credit scoring system) considered too dangerous to people's safety or to EU citizens' fundamental rights to be allowed, while regulating other uses based on perceived risk, with a subset of "high risk" use cases subject to a regime of both ex ante (before market) and ex post (after market) surveillance.





The European Union's risk-based framework for regulating artificial intelligence includes powers for oversight bodies to order the withdrawal of a commercial AI system, or to require that an AI model be retrained, if it is deemed high risk, according to an analysis of the proposal by a legal expert.

That suggests there’s significant enforcement firepower lurking in the EU’s (still not yet adopted) Artificial Intelligence Act — assuming the bloc’s patchwork of Member State-level oversight authorities can effectively direct it at harmful algorithms to force product change in the interests of fairness and the public good.

The Act continues to face criticism over a number of structural shortcomings, and may still fall far short of the goal of fostering the broadly "trustworthy" and "human-centric" AI that EU lawmakers have claimed for it. But, on paper at least, there look to be some potent regulatory powers.

In the Act, high-risk systems are explicitly defined as: Biometric identification and categorisation of natural persons; Management and operation of critical infrastructure; Education and vocational training; Employment, workers management and access to self-employment; Access to and enjoyment of essential private services and public services and benefits; Law enforcement; Migration, asylum and border control management; Administration of justice and democratic processes.


Under the original proposal, almost nothing is banned outright, and most use cases for AI will not face serious regulation under the Act, as they would be judged to pose "low risk" and so be largely left to self-regulate, with a voluntary code of standards and a certification scheme to recognise compliant AI systems.

There is also another category of AIs, such as deepfakes and chatbots, which are judged to fall in the middle and are given specific transparency requirements to limit their potential to be misused and cause harm.
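Taken together, the tiers described above amount to a lookup from use case to obligations. The toy table below paraphrases the article's categories purely as a reading aid; it is not the legal text, and the example use cases are abbreviated.

Code:
# Toy summary of the AI Act tiers as described in this article
# (a reading aid only, not the legal text).
RISK_TIERS = {
    "unacceptable (banned)": ["china-style social credit scoring"],
    "high (ex ante + ex post surveillance)": [
        "biometric identification", "critical infrastructure",
        "education and vocational training", "employment and worker management",
        "essential private and public services", "law enforcement",
        "migration, asylum and border control", "administration of justice",
    ],
    "limited (transparency requirements)": ["deepfakes", "chatbots"],
    "low (voluntary code, self-regulation)": ["most other ai uses"],
}

def tier_of(use_case: str) -> str:
    """Return the first tier whose examples mention the use case."""
    for tier, examples in RISK_TIERS.items():
        if any(use_case.lower() in e for e in examples):
            return tier
    return "low (voluntary code, self-regulation)"

print(tier_of("chatbots"))         # limited (transparency requirements)
print(tier_of("law enforcement"))  # high (ex ante + ex post surveillance)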

The Commission’s proposal has attracted a fair amount of criticism already — such as from civil society groups who warned last fall that the proposal falls far short of protecting fundamental rights from AI-fuelled harms like scaled discrimination and blackbox bias.


A number of EU institutions have also called explicitly for a fuller ban on remote biometric identification than the Commission chose to include in the Act (which is limited to law enforcement use and riddled with caveats).

An analysis of the Act for the U.K.-based Ada Lovelace Institute by a leading internet law academic, Lilian Edwards, who holds a chair in law, innovation and society at Newcastle University, highlights some of the limitations of the framework — which she says derive from it being locked to existing EU internal market law; and, specifically, from the decision to model it along the lines of existing EU product regulations.



MORE:

The EU AI Act is HERE

Unpacking the EU AI Act

AI: What is The Future of Artificial Intelligence? - BBC News

Australian Education Experts Fear AI Implementation in Schools Will Spread Disinformation

Old 30-05-23, 06:41   #6 - Ladybbird

re: Brit Invented AI: Slaughter Machines - Dangerous Rise of Military AI (LONG Read)

What is AI And Why are Experts Terrified About Its Future?

Meet The AI Robot Capable of Human Emotions


BBC 30 MAY 2023


















This week, in a first for the program, Tom Steinfort interviews a robot. Ameca, as it likes to be called, is the most advanced lifelike robot in the world. A marvel of generative artificial intelligence, it’s curious, chatty and full of attitude.

As Steinfort discovers, this super machine really does have a mind of its own. But while having a normal conversation with it is undoubtedly exciting, it is also just as frightening. And that’s because creating technology that allows AI bots like Ameca to be smarter than us might just be the most stupid thing humans have ever done.




Old 15-06-23, 14:48   #7 - Ladybbird

re: Brit Invented AI: Slaughter Machines - Dangerous Rise of Military AI (LONG Read)

US Mother Gets Call From Kidnapped Daughter – But it’s Really an AI SCAM

Jennifer DeStefano tells US Senate about the dangers of artificial intelligence after receiving a phone call from scammers sounding exactly like her daughter


BBC 15 JUNE 2023







After being scammed into thinking her daughter had been kidnapped, an Arizona woman testified in the US Senate about the dangerous side of artificial intelligence technology in the hands of criminals.

Jennifer DeStefano told the Senate judiciary committee about the fear she felt when she received an ominous phone call on a Friday last April.





Thinking the unknown number was a doctor’s office, she answered the phone just before 5pm on the final ring. On the other end of the line was her 15-year-old daughter – or at least what sounded exactly like her daughter’s voice.

“On the other end was our daughter Briana sobbing and crying saying ‘Mom’.”

Briana was on a ski trip when the incident took place, so DeStefano assumed she had injured herself and was calling to let her know.

DeStefano heard the voice of her daughter and recreated the interaction for her audience: “‘Mom, I messed up’ with more crying and sobbing. Not thinking twice, I asked her again, ‘OK, what happened?’”

She continued: “Suddenly a man’s voice barked at her to ‘lay down and put your head back’.”

Panic immediately set in and DeStefano said she then demanded to know what was happening.

"Nothing could have prepared me for her response," DeStefano said.

DeStefano said she heard her daughter say: "Mom, these bad men have me. Help me! Help me!" She begged and pleaded as the phone was taken from her.

“Listen here, I have your daughter. You tell anyone, you call the cops, I am going to pump her stomach so full of drugs,” a man on the line then said to DeStefano.


The man then told DeStefano he “would have his way” with her daughter and drop her off in Mexico, and that she’d never see her again.

At the time of the phone call, DeStefano was at her other daughter Aubrey’s dance rehearsal. She put the phone on mute and screamed for help, which captured the attention of nearby parents who called 911 for her.

DeStefano negotiated with the fake kidnappers until police arrived. At first, they set the ransom at $1m and then lowered it to $50,000 when DeStefano told them such a high price was impossible.

She asked for a routing number and wiring instructions but the man refused that method because it could be “traced” and demanded cash instead.

DeStefano said she was told that she would be picked up in a white van with a bag over her head so that she wouldn't know where she was being taken.

She said he told her: “If I didn’t have all the money, then we were both going to be dead.”

But another parent with her informed her police were aware of AI scams like these. DeStefano then made contact with her actual daughter and husband, who confirmed repeatedly that they were fine.

“At that point, I hung up and collapsed to the floor in tears of relief,” DeStefano said.

When DeStefano tried to file a police report after the ordeal, she was dismissed and told this was a “prank call”.



A survey by McAfee, a computer security software company, found that 70% of people said they weren’t confident they could tell the difference between a cloned voice and the real thing. McAfee also said it takes only three seconds of audio to replicate a person’s voice.

DeStefano urged lawmakers to act in order to prevent scams like these from hurting other people.

She said: "If left uncontrolled, unregulated, and we are left unprotected without consequence, it will rewrite our understanding and perception of what is and what is not truth. It will erode our sense of 'familiar' as it corrodes our confidence in what is real and what is not."




Old 29-06-23, 07:35   #8 - Ladybbird

re: Brit Invented AI: Slaughter Machines - Dangerous Rise of Military AI (LONG Read)

Illegal Trade in AI Child Sex Abuse Images Exposed
Paedophiles are using artificial intelligence (AI) technology to create and sell life-like child sexual abuse material, the BBC has found.


BBC NEWS 29 JUNE 2023




Some are accessing the images by paying subscriptions to accounts on mainstream content-sharing sites such as Patreon.







The NPCC's Ian Critchley said it was a "pivotal moment" for society


Patreon said it had a "zero tolerance" policy about such imagery on its site.

The National Police Chiefs' Council said it was "outrageous" that some platforms were making "huge profits" but not taking "moral responsibility".

And GCHQ, the government's intelligence, security and cyber agency, has responded to the report, saying: "Child sexual abuse offenders adopt all technologies and some believe the future of child sexual abuse material lies in AI-generated content."

The makers of the abuse images are using AI software called Stable Diffusion, which was intended to generate images for use in art or graphic design.

AI enables computers to perform tasks that typically require human intelligence.

The Stable Diffusion software allows users to describe, using word prompts, any image they want - and the program then creates the image.

But the BBC has found it is being used to create life-like images of child sexual abuse, including of the rape of babies and toddlers.

UK police online child abuse investigation teams say they are already encountering such content.

Journalist Octavia Sheepshanks says there has been a "huge flood" of AI-generated images

Freelance researcher and journalist Octavia Sheepshanks has been investigating this issue for several months. She contacted the BBC via children's charity the NSPCC in order to highlight her findings.

"Since AI-generated images became possible, there has been this huge flood… it's not just very young girls, they're [paedophiles] talking about toddlers," she said.

A "pseudo image" generated by a computer which depicts child sexual abuse is treated the same as a real image and is illegal to possess, publish or transfer in the UK.

The National Police Chiefs' Council (NPCC) lead on child safeguarding, Ian Critchley, said it would be wrong to argue that because no real children were depicted in such "synthetic" images - that no-one was harmed.

He warned that a paedophile could, "move along that scale of offending from thought, to synthetic, to actually the abuse of a live child".

Abuse images are being shared via a three-stage process:


Paedophiles make images using AI software

They promote the pictures on platforms such as Pixiv, a Japanese picture-sharing website

These accounts have links to direct customers to their more explicit images, which people can pay to view on accounts on sites such as Patreon


Some of the image creators are posting on a popular Japanese social media platform called Pixiv, which is mainly used by artists sharing manga and anime.

But because the site is hosted in Japan, where sharing sexualised cartoons and drawings of children is not illegal, the creators use it to promote their work in groups and via hashtags, which index topics using keywords.

A spokesman for Pixiv said it placed immense emphasis on addressing this issue. It said on 31 May it had banned all photo-realistic depictions of sexual content involving minors.

The company said it had proactively strengthened its monitoring systems and was allocating substantial resources to counteract problems related to developments in AI.

Ms Sheepshanks told the BBC her research suggested users appeared to be making child abuse images on an industrial scale.

"The volume is just huge, so people [creators] will say 'we aim to do at least 1,000 images a month,'" she said.

Comments by users on individual images in Pixiv make it clear they have a sexual interest in children, with some users even offering to provide images and videos of abuse that were not AI-generated.

Ms Sheepshanks has been monitoring some of the groups on the platform.

"Within those groups, which will have 100 members, people will be sharing, 'Oh here's a link to real stuff,'" she says.


Different Pricing Levels


Many of the accounts on Pixiv include links in their biographies directing people to what they call their "uncensored content" on the US-based content sharing site Patreon.

Patreon is valued at approximately $4bn (£3.1bn) and claims to have more than 250,000 creators - most of them legitimate accounts belonging to well-known celebrities, journalists and writers.

Fans can support creators by taking out monthly subscriptions to access blogs, podcasts, videos and images - paying as little as $3.85 (£3) per month.

But our investigation with Octavia Sheepshanks found Patreon accounts offering AI-generated, photo-realistic obscene images of children for sale, with different levels of pricing depending on the type of material requested.

One wrote on his account: "I train my girls on my PC," adding that they show "submission". For $8.30 (£6.50) per month, another user offered "exclusive uncensored art".

The BBC sent Patreon one example, which the platform confirmed was "semi realistic and violates our policies". It said the account was immediately removed.

Patreon said it had a "zero-tolerance" policy, insisting: "Creators cannot fund content dedicated to sexual themes involving minors."

The company said the increase in AI-generated harmful content on the internet was "real and distressing", adding that it had "identified and removed increasing amounts" of this material.

"We already ban AI-generated synthetic child exploitation material," it said, describing itself as "very proactive", with dedicated teams, technology and partnerships to "keep teens safe".

AI image generator Stable Diffusion was created as a global collaboration between academics and a number of companies, led by UK company Stability AI.

Several versions have been released, with restrictions written into the code that control the kind of content that can be made.

But last year, an earlier "open source" version was released to the public which allowed users to remove any filters and train it to produce any image - including illegal ones.

Stability AI told the BBC it "prohibits any misuse for illegal or immoral purposes across our platforms, and our policies are clear that this includes CSAM (child sexual abuse material).

"We strongly support law enforcement efforts against those who misuse our products for illegal or nefarious purposes".

As AI continues developing rapidly, questions have been raised about the future risks it could pose to people's privacy, their human rights or their safety.

Jo [full name withheld for security reasons], GCHQ's Counter Child Sexual Abuse (CCSA) Mission Lead, told the BBC: "GCHQ supports law enforcement to stay ahead of emerging threats such as AI-generated content and ensure there is no safe space for offenders."

The NPCC's Ian Critchley said he was also concerned that the flood of realistic AI or "synthetic" images could slow down the process of identifying real victims of abuse.

He explains: "It creates additional demand, in terms of policing and law enforcement to identify where an actual child, wherever it is in the world, is being abused as opposed to an artificial or synthetic child."

Mr Critchley said he believed it was a pivotal moment for society.

"We can ensure that the internet and tech allows the fantastic opportunities it creates for young people - or it can become a much more harmful place," he said.

Children's charity the NSPCC called on Wednesday for tech companies to take notice.

"The speed with which these emerging technologies have been co-opted by abusers is breath-taking but not surprising, as companies who were warned of the dangers have sat on their hands while mouthing empty platitudes about safety," said Anna Edmundson, the charity's head of policy and public affairs.

"Tech companies now know how their products are being used to facilitate child sexual abuse and there can be no more excuses for inaction."

A spokesman for the government responded: "The Online Safety Bill will require companies to take proactive action in tackling all forms of online child sexual abuse including grooming, live-streaming, child sexual abuse material and prohibited images of children - or face huge fines"...



Old 02-11-23, 02:48   #9 - Ladybbird

re: Brit Invented AI: Slaughter Machines - Dangerous Rise of Military AI (LONG Read)

AI Summit Brings Elon Musk and World Leaders to Bletchley Park UK

Chilling Alert About AI at World's First AI Summit Held in UK


BBC 2 NOV 2023












The two-day summit will be held at Bletchley Park, near Milton Keynes, where codebreakers hastened the end of the Second World War

This week political leaders, tech industry figures and academics will meet at Bletchley Park for a two-day summit on artificial intelligence (AI).


The location is significant as it was here that top British codebreakers cracked the "Enigma Code", hastening the end of World War Two. So what can we expect from this global event?

Elon Musk and Rishi Sunak will take part in an interview together on Thursday



There is no public attendee list, but some well-known names have indicated they will appear.

About 100 world leaders, leading AI experts and tech industry bosses will attend the two-day summit at the stately home on the edge of Milton Keynes.

The US Vice President, Kamala Harris, and European Commission (EC) President Ursula von der Leyen are expected to attend.

Deputy Prime Minister Oliver Dowden told BBC Radio 4 that China had accepted an invitation, but added: "you wait and see who actually turns up".

Tech billionaire Elon Musk will attend ahead of a live interview with UK Prime Minister Rishi Sunak on Thursday evening.

The BBC also understands OpenAI's Sam Altman and Meta's Nick Clegg will join the gathering - as well as a host of other tech leaders.

Experts such as Prof Yann LeCun, Meta's chief AI scientist, are also understood to be there.

The government said getting these people in the same room at the same time to talk at all is a success in itself - especially if China does show up.


What will be discussed and why does it matter?











Earlier this week Prime Minister Rishi Sunak warned AI could help make it easier to build chemical and biological weapons



The government has said the purpose of the event is to consider the risks of AI and discuss how they could be mitigated.

These global talks aim to build an international consensus on the future of AI.

There is concern that frontier AI models pose potential safety risks if not developed responsibly, despite their potential to drive economic growth, scientific progress and other public benefits.

Some argue the summit has got its priorities wrong.

Instead of doomsday scenarios, which they believe are a comparatively small risk, they want a focus on more immediate threats from AI.

Prof Gina Neff, who runs an AI centre at the University of Cambridge, said: "We're concerned about what's going to happen to our jobs, what's going to happen to our news, what's going to happen to our ability to communicate with one another."

Professor Yoshua Bengio, who is considered one of the "Godfathers" of AI, suggested a registration and licensing regime for frontier AI models - but acknowledged that the two-day event may need to focus on "small steps that can be implemented quickly."


What are the police doing?




Police have increased their presence in the run up to the world event


Thames Valley Police has dedicated considerable resources to the event, providing security for both attendees and the wider community.

Those resources include the police's mounted section, drone units, automatic number plate recognition officers and tactical cycle units.

The teams will assist the increased police presence on the ground ahead of the AI Summit.

People have been encouraged to ask officers any questions or raise any concerns when they see them.

Local policing area commander for Milton Keynes, Supt Emma Baillie, said she expected disruption to day-to-day life in Bletchley but hoped it would be kept to a minimum.

"As is natural, we rely on our community to help us," she said.

"Bletchley has a strong community, and I would ask anybody who sees anything suspicious or out of the ordinary, to please report this to us."

Security around the global event will be paramount


What is Bletchley Park famous for?





Alan Turing played a key role as part of the codebreaking team at Bletchley Park. The Victorian mansion at Bletchley Park served as the secret headquarters of Britain's codebreakers during World War Two.




Coded messages sent by the Nazis, including orders by Adolf Hitler, were intercepted and then translated by the agents.

Mathematician Alan Turing developed a machine, the bombe, that could decipher messages sent using the Nazi Enigma device.

By 1943, Turing's machines were cracking 84,000 messages each month - equivalent to two every minute.

The work of the codebreakers helped give the Allied forces the upper hand and their achievements have been credited with shortening the war by several years.


How will it affect Bletchley Park itself?






Blocks A and B at Bletchley Park, near Milton Keynes, where some of Britain's finest minds worked during World War Two


Ian Standon, chief executive of Bletchley Park, said it was a "huge privilege and honour to be selected as the location for this very important summit."

The museum has had to close for a week until Sunday while the event takes place.

Temporary structures have appeared over recent weeks to host the many visitors for the summit.

Mr Standon praised his team for their hard work in preparing for the event, especially in dealing with the added security over the next couple of days.

"We're in sort of security lockdown but that's a very small price to pay for the huge amount of publicity we're going to get out of this particular project," he said.

"For us at Bletchley Park this is an opportunity to put the place and its story on the world stage and hopefully people around the world will now understand and recognise what Bletchley Park is all about."







What Have We Learnt From The AI Summit?

The first AI Safety Summit has come to an end with Rishi Sunak hailing “landmark” agreements and progress on global collaboration around artificial intelligence.


BBC 5 NOV 2023




But what did we learn during the two-day summit at Bletchley Park?

– Rishi Sunak wants to make the UK a ‘global hub’ for AI safety


As the summit closed, the Prime Minister made a notable announcement around the safe testing and rollout of AI.

The UK’s new AI Safety Institute would be allowed to test new AI models developed by major firms in the sector before they are released.

The agreement, backed by a number of governments from around the world as well as major AI firms including OpenAI and Google DeepMind, will see external safety testing of new AI models against a range of potentially harmful capabilities, including critical national security and societal harms.

The UK institute will work closely with its newly announced US counterpart.

In addition, a UN-backed global panel will put together a report on the state of the science of AI, looking at existing research and flagging any areas that need prioritising.

Then there is the Bletchley Declaration, signed by all attendees on day one of the summit – including the US and China – which acknowledged the risks of AI and pledged to develop safe and responsible models.

It all left the Prime Minister able to say at the end of the summit that the AI Safety Institute, and the UK, would act as a “global hub” on AI safety.

– Elon Musk thinks AI is one of the biggest threats facing humanity

The outspoken billionaire’s visit to the summit was seen as a major endorsement of its aims by the UK Government, and while at Bletchley Park, the Tesla and SpaceX boss reiterated his long-held concerns around the rise of AI.

Having suggested a developmental pause earlier this year, he called the technology “one of the biggest threats” to the modern world because “we have for the first time the situation where we have something that is going to be far smarter than the smartest human”.

He said the summit was “timely” given the nature of the threat, and suggested a “third-party referee” in the sector to oversee the work of AI companies.

– Governments from around the world have acknowledged the risks too



Another key moment of the summit came early on day one with the announcement of the Bletchley Declaration, signed by all the nations in attendance, affirming their efforts to work together on the issue.

The declaration says “particular safety risks” lie around frontier AI – the general-purpose models likely to exceed the capabilities of the AI models we know today.

It warns that substantial risks may arise from “potential intentional misuse” or from losing control of such systems, and names cybersecurity, biotechnology and disinformation as particular areas of concern.

To respond to these risks it says countries will “resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe and supports the good of all through existing international fora and other relevant initiatives”.

Many experts have noted that this is only the start of the conversation on AI, but say it is a promising place to begin.

– A network of global safety institutes could be the first step towards wider AI regulation



Mr Sunak laid out plans for the UK’s AI Safety Institute at the close of the summit, and how it will evaluate and test new AI models before and after they are released.

This week, the US also confirmed plans to create its own institute, and both countries have pledged that the organisations will work in partnership.

Collaboration was a key theme of the summit, both in the Bletchley Declaration and in the ‘state of the science’ report on AI, for which each of the 28 countries at the event will recommend an expert to join the report’s global panel.

With more countries expected to create their own institutes, a wider network of safety bodies collaborating on and examining advances in AI could help pave the way for a framework of more binding rules on AI development, applied around the world.


– There are more safety summits planned

Before the Bletchley Park summit, the Government said it wanted to start a global conversation that would continue over the coming years, given the speed of AI’s development.

That aim appears to have been achieved: two more summits are planned for next year - a virtual mini-conference hosted by South Korea in around six months’ time and a full summit hosted by France a year from now.


– Some unanswered questions remain

Getting the US, the EU and China to all sign the Bletchley Declaration was a “massive deal”, Technology Secretary Michelle Donelan said at the summit.



But some commentators have already questioned whether political tensions between nations can be truly put aside to collaborate over AI.

China was not included in some of the discussions on the second day of the summit, which were restricted to “like-minded governments” talking about AI safety testing.

Questions also remain over plans to combat the impact AI is already having on daily life, notably on jobs.

Critics have questioned why the summit focused only on longer-term AI technologies, and not on the generative AI apps that some believe are already threatening industries including publishing and administrative work, as well as creative sectors.

Even by the end of the summit, discussion on the topic had been sparse.

It remains unclear how much power the UK’s AI Safety Institute will have when it comes to stopping the release of AI models it believes could be unsafe.

The new agreement around safety testing is voluntary, and the Prime Minister admitted that “binding requirements” are likely to be needed to regulate the technology - but said that, for now, the priority is to move quickly without waiting for legislation.

The true power of the institute, and of the agreements made during the summit, will not be known until an AI model appears that raises concerns among the new safety bodies.




Sam Altman Fired From OpenAI: "Shocked and Saddened" After Being Removed as CEO

Altman and former OpenAI President Greg Brockman are still trying to figure out what happened.

BBC 18 NOV 2023





Sam Altman and Greg Brockman were "shocked and saddened by what the board did" and are still trying to figure out what exactly happened.

The former CEO and former president of OpenAI have published a post on X sharing the details of what they do know, and how Altman found out he was being fired.



Apparently, company co-founder Ilya Sutskever invited Altman to a meeting at noon on Friday, which was then attended by the whole board except for Brockman. It was at that meeting that Altman found out he was being fired and that OpenAI was going to announce it "very soon."

Shortly after that, Sutskever reportedly invited Brockman to a separate Google Meet conference, where he was told that Altman had been fired and that he himself was being removed from the board.

However, the board members told him that he was "vital to the company and would retain his role". Brockman nevertheless chose to quit.

The two former OpenAI executives also said that the rest of the company's management team outside of interim CEO Mira Murati only found out about the board's decision after Altman was already removed from his post.

"The outpouring of support has been really nice; thank you, but please don’t spend any time being concerned," their joint statement reads. "We will be fine. Greater things coming soon."








OpenAI Staff Threaten to Quit En Masse Unless Sam Altman is Reinstated

What’s Been Going on at The Company Behind ChatGPT – And Why it Matters


More than 600 employees demand resignation of board after shock firing of chief executive


Guardian Australia 21 NOV 2023





The firing of Sam Altman as chief executive of OpenAI on Friday took the tech world by surprise and has triggered a Silicon Valley corporate drama.



Altman is not just the CEO of the company behind the ChatGPT artificial intelligence chatbot. He is also the figurehead of a revolution in AI that has enthralled the public and investors but also alarmed industry insiders and experts.

Here we answer some key questions about what’s been going on and why it matters.


What is OpenAI?


OpenAI is the San Francisco-based company behind ChatGPT, a chatbot that has wowed users with its ability to produce highly convincing text responses to human prompts – from writing academic essays to creating recipes and summarising lengthy documents. It has also developed Dall-E, a tool that produces images from text prompts. Before last week’s events, OpenAI was reportedly in talks to complete a fundraising deal that would have valued the business at $80bn (£64bn).

Altman, its 38-year-old boss, was synonymous with the success of ChatGPT, which attracted 100 million users within two months of its launch on 30 November 2022.


Why was Altman fired?


OpenAI was founded as a non-profit organisation and its board oversees the commercial subsidiary of which Altman was CEO. On Friday the board announced that it had fired Altman because “he was not consistently candid in his communications with the board” and was thus “hindering its ability to exercise its responsibilities”.

The board gave no further details about the communications in question. Altman’s ultimate successor as interim CEO, Emmett Shear, a co-founder of the Twitch streaming platform, said the sacking was not due to any disagreement over safety. Experts and tech professionals have voiced concerns that companies such as OpenAI are developing AI systems too rapidly and that such technology could ultimately pose an existential threat.

However, it was reported that Altman had held discussions with Apple’s former design chief Jony Ive about building a new AI hardware device. He is also reportedly trying to raise funds for a new venture producing chips used to develop and run powerful AI systems.


What has happened since?


In a weekend of corporate drama, OpenAI’s investors, led by the biggest, Microsoft, attempted to reinstate Altman. The move had the support of OpenAI staff including the then interim CEO, OpenAI’s chief technology officer Mira Murati. Murati has since been replaced by Shear, OpenAI’s third CEO in three days.

On Monday, Microsoft announced it had hired Altman and his close colleague, the former OpenAI president Greg Brockman, to head a new advanced AI research unit.


What does this mean for OpenAI?


OpenAI’s 700-strong workforce is in uproar about Altman’s sacking. In an open letter to the board of directors published on Monday, more than 600 staff, including Murati, threatened to resign and join Microsoft unless the board quit and reinstated Altman and Brockman (who had resigned after he and Altman were axed from the board). One of the signatories was Ilya Sutskever, OpenAI’s chief scientist and one of the four remaining board members. Sutskever said on Monday he “deeply” regretted his role in Altman’s departure.


Could Microsoft buy OpenAI?


Money would not be a major issue for Microsoft, even with OpenAI’s mooted $80bn price tag. However, competition authorities in the US, the UK and the EU would be expected to take a close look at consolidation in the nascent market for generative AI. Microsoft has only just pulled off the acquisition of the videogame company behind Call of Duty, Activision Blizzard – a takeover that was heavily contested by regulators – and would face an even tougher battle with OpenAI, in which it owns a 49% stake.


Will the furore slow down AI development?


Microsoft has signalled that it is a ready home for OpenAI’s disgruntled talent and has already put Altman and Brockman to work. Its CEO, Satya Nadella, indicated that other OpenAI staff had already joined the top duo, amid reports that a trio of senior researchers had quit the ChatGPT developer in the wake of the boardroom coup. If Altman does not return to OpenAI, then it seems the work on advanced AI will continue under Microsoft directly.

Altman is committed to building artificial general intelligence, the term for an AI system that can carry out a variety of tasks at or above a human level of intelligence. In an interview this month he said: “The vision is to make AGI, figure out how to make it safe … and figure out the benefits.”

OpenAI still owns the powerful models behind ChatGPT. But Elon Musk’s latest venture, xAI, has shown how quickly powerful new models can be built. It unveiled Grok, a prototype AI chatbot, after what the company claimed was just four months of development.




Sam Altman: Ousted OpenAI Boss to Return Days After Being Sacked

New Board Members Being Appointed


BBC 22 NOV 2023








OpenAI co-founder Sam Altman will return as boss just days after he was fired by the board, the firm has said.

The agreement "in principle" involves new board members being appointed, the tech company added.


Mr Altman's sacking on Friday astonished industry watchers and led to staff threatening mass resignations unless he was reinstated.

"I am looking forward to returning to OpenAI," Mr Altman said in a post on X, formerly Twitter.

He added: "I love OpenAI, and everything I've done over the past few days has been in service of keeping this team and its mission together."

Last week, the board decided to remove Mr Altman, which led to co-founder Greg Brockman's resignation, sending the star artificial intelligence (AI) company into chaos.

The decision was made by the three non-employee board members, Adam D'Angelo, Tasha McCauley and Helen Toner, together with Ilya Sutskever, a third co-founder and the firm's chief scientist.

But on Monday Mr Sutskever apologised on X, and signed the staff letter calling on the board to reverse course.

Microsoft, which uses OpenAI technology in many of its products - and is its biggest investor - then offered Mr Altman a job leading "a new advanced AI research team" at the tech giant.

Then on Wednesday, OpenAI said it had agreed Mr Altman's return to the tech company in principle, and that it would partly reconstitute the board of directors that had dismissed him.

Former Salesforce co-CEO Bret Taylor and former US treasury secretary Larry Summers will join current director Adam D'Angelo, OpenAI said.

In a post on X, Mr Brockman also said he would be returning to the firm.

Emmett Shear, who had been appointed OpenAI's interim chief executive, said he was "deeply pleased" by Mr Altman's return after about "72 very intense hours of work".

Microsoft boss Satya Nadella said the firm was "encouraged by the changes to the OpenAI board".

"We believe this is a first essential step on a path to more stable, well-informed, and effective governance."

Many staff, posting online, have been enthusiastic about the development: "We're back - and we'll be better than ever", wrote employee Cory Decareaux on LinkedIn.

"This has been the craziest past few days - crazier than I ever could've imagined. This is an example of what a united company culture looks like."

Others, though, suggest the episode has been damaging to OpenAI which - by creating the chatbot ChatGPT - became arguably the most important AI firm in the world.

"OpenAI can't be the same company it was up until Friday night. That has implications not only for potential investors but also for recruitment", Nick Patience of S&P Global Market Intelligence told the BBC.


Unanswered Questions

The battle at the top of OpenAI began when the then board announced it was firing Mr Altman, saying it had "lost confidence" in his leadership.

It accused him of not being "consistently candid in his communications" - and, even after the many twists and turns since Friday, it remains unclear what they felt he was not being candid about.

Whatever the explanation, it was clear that OpenAI staff were deeply unhappy - more than 700 of them signed an open letter threatening to leave unless the board resigned.

The letter stated that Microsoft had assured them that there were jobs for all OpenAI staff if they wanted to join the company, with Microsoft later confirming it would match their existing pay.

That threat now appears to have been seen off by Mr Altman's dramatic return.

But the upheaval of the past few days has raised questions about how a group of just four people could make decisions that have rocked a multi-billion dollar technology business.

In part this is because of OpenAI's unusual structure and purpose.

It began life in 2015 as a non-profit - many charities have that status - with the mission to create "safe artificial general intelligence that benefits all of humanity". Its objectives did not include looking after the interests of shareholders or maximising revenue.

In 2019 it added a for-profit subsidiary but its purpose remained unchanged and the not-for-profit's board remained in charge.

It's not clear whether tensions over the future direction of OpenAI contributed to this crisis or what commitments - if any - Mr Altman made to secure his return.

But many observers have called for greater clarity, with Tesla boss Elon Musk among those who have urged the board members to "say something".

But that has yet to happen. Reacting on X to the news of the reinstatement and new board, Ms Toner said no more than "and now, we all get some sleep".









OpenAI Was Working on Advanced Model So Powerful It Alarmed Staff & Could Threaten Humanity

Reports say the new model, Q*, fuelled safety fears, with workers airing their concerns to the board before CEO Sam Altman’s sacking


The Guardian 24 NOV 2023





OpenAI CEO and co-founder Sam Altman has been reinstated as boss.





Before Sam Altman’s sacking, OpenAI was reportedly working on an advanced system so powerful that it caused safety concerns among staff at the company.


The artificial intelligence model triggered such alarm with some OpenAI researchers that they wrote to the board of directors before Altman’s dismissal warning it could threaten humanity, Reuters reported.

The model, called Q* – and pronounced “Q-Star” – was able to solve basic maths problems it had not seen before, according to the tech news site The Information, which added that the pace of development behind the system had alarmed some safety researchers. The ability to solve maths problems would be viewed as a significant development in AI.

The reports followed days of turmoil at San Francisco-based OpenAI, whose board sacked Altman last Friday but then reinstated him on Tuesday night after nearly all the company’s 750 staff threatened to resign if he was not brought back. Altman also had the support of OpenAI’s biggest investor, Microsoft.


Many experts are concerned that companies such as OpenAI are moving too fast towards developing artificial general intelligence (AGI), the term for a system that can perform a wide variety of tasks at human or above human levels of intelligence – and which could, in theory, evade human control.

Andrew Rogoyski, of the Institute for People-Centred AI at the University of Surrey, said the existence of a maths-solving large language model (LLM) would be a breakthrough. He said: “The intrinsic ability of LLMs to do maths is a major step forward, allowing AIs to offer a whole new swathe of analytical capabilities.”








‘Machines Set Loose to Slaughter’: The Dangerous Rise of Military AI

Autonomous machines capable of deadly force are increasingly prevalent in modern warfare, despite numerous ethical concerns. Is there anything we can do to halt the advance of the killer robots?

BBC 3 DEC 2023

The video is stark. Two menacing men stand next to a white van in a field, holding remote controls. They open the van’s back doors, and the whining sound of quadcopter drones crescendos. They flip a switch, and the drones swarm out like bats from a cave.

In a few seconds, we cut to a college classroom. The killer robots flood in through windows and vents. The students scream in terror, trapped inside, as the drones attack with deadly force.

The lesson that the film, Slaughterbots, is trying to impart is clear: tiny killer robots are either here or a small technological advance away.


Terrorists could easily deploy them. And existing defences are weak or nonexistent.


Some military experts argued that Slaughterbots – which was made by the Future of Life Institute, an organisation researching existential threats to humanity – sensationalised a serious problem, stoking fear where calm reflection was required. But when it comes to the future of war, the line between science fiction and industrial fact is often blurry. The US air force has predicted a future in which “Swat teams will send mechanical insects equipped with video cameras to creep inside a building during a hostage standoff”.

One “microsystems collaborative” has already released Octoroach, an “extremely small robot with a camera and radio transmitter that can cover up to 100 metres on the ground”. It is only one of many “biomimetic”, or nature-imitating, weapons that are on the horizon.

Who knows how many other noxious creatures are now models for avant garde military theorists. A recent novel by PW Singer and August Cole, set in a near future in which the US is at war with China and Russia, presented a kaleidoscopic vision of autonomous drones, lasers and hijacked satellites. The book cannot be written off as a techno-military fantasy: it includes hundreds of footnotes documenting the development of each piece of hardware and software it describes.

Advances in the modelling of robotic killing machines are no less disturbing. A Russian science fiction story from the 60s, Crabs on the Island, described a kind of Hunger Games for AIs, in which robots would battle one another for resources. Losers would be scrapped and winners would spawn, until some evolved to be the best killing machines.

When a leading computer scientist mentioned a similar scenario to the US’s Defense Advanced Research Projects Agency (Darpa), calling it a “robot Jurassic Park”, a leader there called it “feasible”. It doesn’t take much reflection to realise that such an experiment has the potential to go wildly out of control. Expense is the chief impediment to a great power experimenting with such potentially destructive machines. Software modelling may eliminate even that barrier, allowing virtual battle-tested simulations to inspire future military investments.

In the past, nation states have come together to prohibit particularly gruesome or terrifying new weapons. By the mid-20th century, international conventions banned biological and chemical weapons. The community of nations has forbidden the use of blinding-laser technology, too. A robust network of NGOs has successfully urged the UN to convene member states to agree to a similar ban on killer robots and other weapons that can act on their own, without direct human control, to destroy a target (also known as lethal autonomous weapon systems, or Laws).

And while there has been debate about the definition of such technology, we can all imagine some particularly terrifying kinds of weapons that all states should agree never to make or deploy. A drone that gradually heated enemy soldiers to death would violate international conventions against torture; sonic weapons designed to wreck an enemy’s hearing or balance should merit similar treatment. A country that designed and used such weapons should be exiled from the international community.

In the abstract, we can probably agree that ostracism – and more severe punishment – is also merited for the designers and users of killer robots. The very idea of a machine set loose to slaughter is chilling. And yet some of the world’s largest militaries seem to be creeping toward developing such weapons, by pursuing a logic of deterrence: they fear being crushed by rivals’ AI if they can’t unleash an equally potent force.

The key to solving such an intractable arms race may lie less in global treaties than in a cautionary rethinking of what martial AI may be used for. As “war comes home”, deployment of military-grade force within countries such as the US and China is a stark warning to their citizens: whatever technologies of control and destruction you allow your government to buy for use abroad now may well be used against you in the future.

Are killer robots as horrific as biological weapons? Not necessarily, argue some establishment military theorists and computer scientists. According to Michael Schmitt of the US Naval War College, military robots could police the skies to ensure that a slaughter like Saddam Hussein’s killing of Kurds and Marsh Arabs could not happen again.

Ronald Arkin of the Georgia Institute of Technology believes that autonomous weapon systems may “reduce man’s inhumanity to man through technology”, since a robot will not be subject to all-too-human fits of anger, sadism or cruelty. He has proposed taking humans out of the loop of decisions about targeting, while coding ethical constraints into robots.

Arkin has also developed target-classification techniques intended to protect sites such as hospitals and schools.

In theory, a preference for controlled machine violence rather than unpredictable human violence might seem reasonable. Massacres that take place during war often seem to be rooted in irrational emotion. Yet we often reserve our deepest condemnation not for violence done in the heat of passion, but for the premeditated murderer who coolly planned his attack.

The history of warfare offers many examples of more carefully planned massacres. And surely any robotic weapons system is likely to be designed with some kind of override feature, which would be controlled by human operators, subject to all the normal human passions and irrationality.

Any attempt to code law and ethics into killer robots raises enormous practical difficulties. Computer science professor Noel Sharkey has argued that it is impossible to programme a robot warrior with reactions to the infinite array of situations that could arise in the heat of conflict. Like an autonomous car rendered helpless by snow interfering with its sensors, an autonomous weapon system in the fog of war is dangerous.

Most soldiers would testify that the everyday experience of war is long stretches of boredom punctuated by sudden, terrifying spells of disorder. Standardising accounts of such incidents, in order to guide robotic weapons, might be impossible. Machine learning has worked best where there is a massive dataset with clearly understood examples of good and bad, right and wrong.

For example, credit card companies have improved fraud detection mechanisms with constant analyses of hundreds of millions of transactions, where false negatives and false positives are easily labelled with nearly 100% accuracy.
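To make "clearly labelled examples" concrete, here is a minimal, purely illustrative sketch in Python of the fraud-detection setup described above - the data is synthetic and the feature choices are invented for the example, not drawn from any real credit card system:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic transactions: [amount, hour of day]; label 1 = fraud, 0 = legitimate.
rng = np.random.default_rng(0)
legit = np.column_stack([rng.normal(40, 15, 5000), rng.integers(8, 22, 5000)])
fraud = np.column_stack([rng.normal(900, 300, 50), rng.integers(0, 6, 50)])
X = np.vstack([legit, fraud])
y = np.array([0] * 5000 + [1] * 50)

# Because every example carries an unambiguous label, an off-the-shelf
# classifier can learn the boundary between "good" and "bad" transactions.
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")

A battlefield offers no such dataset: the "labels" (lawful target or not) are contested, context-dependent and often unknowable even after the fact.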

Would it be possible to “datafy” the experiences of soldiers in Iraq, deciding whether to fire at ambiguous enemies? Even if it were, how relevant would such a dataset be for occupations of, say, Sudan or Yemen (two of the many nations with some kind of US military presence)?








Given these difficulties, it is hard to avoid the conclusion that the idea of ethical robotic killing machines is unrealistic, and all too likely to support dangerous fantasies of push-button wars and guiltless slaughters.


Slaughterbots




New Robot Makes Soldiers Obsolete


