Cool CGI Animation I want to share - about a post-apocalyptic world.

monkeyking1969

I think the fear of autonomous drones in warfare is a bit overblown. Just like people, the AI minds we create will have their own flaws. Nothing ever works 100% of the time, and just as a stubbed toe can take out a Navy SEAL, there will be gaps in AIs that leave them vulnerable even to little minds like our own. People would find it VERY HARD to make something as infallible as the fictional Skynet from Terminator.

A digital mind will be no more unbeatable than the US was in Vietnam or Afghanistan. Having the farthest vision, the biggest ears, and the fastest wits doesn't mean you always win. And nothing we can make is so hardy and self-repairing that such a weapon would outlive us for long. But...


With that said, I love stories that look at the concept of AI in war. The "boogeyman" of an autonomous war machine is interesting, even if implausible in reality. I think such stories are a load of fun. So when I saw the two videos below I was blown away; what a neat set of stories, really well told and very intriguing. The vehicle designs and animation in these two short films are amazing. These videos are over six years old, but they're still a wonderful view of a post-apocalyptic world.


See the videos below in the links. Cool stuff. Dima Fedotov is very talented!



Sysposis for "Fortress": Despite the fact that most of mankind has been killed off, the war still continues because of an automated system left by people.One of the last surviving bombers still performs it"s task to destroy a city long since dead.
CGI Animated Shorts : "Fortress/Крепость" - by Dima Fedotov



Sysposis for "Last Day of the War": The automatic base machines begin fueling and charging the weapons of last surviving bomber (form the first video) preparing it to drop bombs on the dead enemy city.
CGI 3D Animated Shorts : "LAST DAY OF WAR" - by Dima Fedotov

monkeyking1969

What I like about the above two videos is that they have a very limited amount of dialogue, most of it computerized, so they don't screw it up with illogical dialogue or contrary ideas. This is good because, with so little of it, every single line of dialogue has a purpose and is there for a reason; moreover, that means the whole thing is easier to translate into English, Spanish, French, Japanese, or Mongolian.

BladeOfCreation

The worry about fully autonomous lethal systems is certainly not helped by headlines that talk about a drone "with a mind of its own" and "AI capable of making its own decisions," which is just...not how it works. There are still-unconfirmed reports that, last year, an AI-guided drone identified and engaged a target (obviously based on programmed data) without an explicit command to engage that specific target, which would be a first.

MerxWorx01

Eh, we already have AI that determined, in a real-world setting, that an employee was not performing as they should, quietly revoked their computer and building access, and began the termination process. The worry about AI is a real thing, since companies are deciding that an AI that makes slightly fewer mistakes than a person is acceptable to operate autonomously. Car companies are pushing ever harder to put zero-occupant commercial vehicles on the road, despite clear issues with full self-driving systems. Autonomous drivers are getting better every day, until you hear about one driving through a box truck.

I think people have good reason to be worried. Every time you walk past a police officer with a small square object on their waist or chest, your face is actively being analyzed by AI whose R&D, from one major tech company or another, has been found to be less than trustworthy. AI whose parameters can be, and are, altered by the client (police departments) to produce more false positives than normal, which will lead to more false arrests, has been deployed for years. I don't think we should be worried about AI thinking for itself just yet, but we should be worried about AI that's "trained" poorly or maliciously, deployed as safe, and then operating in ways that could be unpredictable and devastating, or worse: operating as expected.