
The Threat of Artificial Intelligence

What is the Internet of Things? I have been trying to answer this question for almost a year, as it is a term thrown around excessively in my media degree. At its most basic level, I understand that the Internet of Things relates to any tangible device with the ability to connect to the Internet; however, the term refers to much more than this.

‘The Internet of Things (IoT) is a scenario in which objects, animals or people are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction’ (Rouse, 2014). It is the idea of taking the person out of the connection sequence that intrigues me most. Can technology, and advances in technology, replace human intelligence?

The knowledge that technology and machines run off is often referred to as Artificial Intelligence (AI). At its core, Artificial Intelligence is a set of rules in programme code which instructs and enables software or hardware to identify a sequence or pattern through detection, and then to respond according to a pre-programmed, prescribed action (Shah, 2014). Artificial Intelligence aims to make machines think like humans, but given the complexity of the human brain, surely this is not possible.
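To make that definition a little more concrete, here is a minimal sketch, entirely my own illustration rather than anything from Shah, of what such a rule-based system looks like in Python. It can only detect patterns it was given in advance, and only respond with actions it was given in advance:

```python
# A minimal sketch of rule-based "intelligence" along the lines of Shah's
# definition: detect a known pattern, then respond with a pre-programmed,
# prescribed action. My own illustration, not code from any real AI system.

RULES = {
    "temperature high": "turn on the fan",      # detected pattern -> prescribed action
    "temperature low": "turn on the heater",
    "motion detected": "switch on the lights",
}

def respond(observation: str) -> str:
    """Match an observation against the known patterns and return the action."""
    for pattern, action in RULES.items():
        if pattern in observation.lower():
            return action
    # A rule-based system has hard limits: anything outside its rules is ignored.
    return "no matching rule; do nothing"

print(respond("Sensor reads: TEMPERATURE HIGH"))  # -> turn on the fan
print(respond("Write me a poem"))                 # -> no matching rule; do nothing
```

Everything the program ‘knows’ sits in that hand-written table, which is exactly the limitation the sceptical perspectives below lean on.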

While I am not an Artificial Intelligence guru, I will try my best to present you with a number of views surrounding it.

Machines Will Steal Your Job

This is perhaps the most popular perspective I have found while exploring Artificial Intelligence. It appears as though many people fear losing their job to a piece of technology. This fear is not irrational, as history has already proven it a reality: in the 1800s, British textile artisans were replaced by mechanised looms; in the 1980s, mid-level draftsmen were replaced by software; and countless cash register staff are now being replaced by self-serve counters.

Research conducted by Pew Research interviewed over 2,000 AI experts, and found that while 52% were optimistic that Artificial Intelligence will grow to be a positive thing between now and 2025, the remaining 48% worried for the future. However, all agreed that ‘the displacement of work by robots and AI is going to continue, and accelerate, over the coming decade’ (Hern, 2014).

Artificial Intelligence Cannot Replace Humans

The human brain thinks in a non-linear fashion, and can therefore deduce non-linear time and life. Harish Shah suggests that ‘technology was always with limits, and those limits are permanent‘ (2014). Long-running cognitive research has shown that ‘cognitive consciousness requires a physical organic biological body’, something that technology simply lacks. While a computer can store more data and make faster calculations, you cannot programme consciousness, intuition or spontaneity into any piece of technology, a limit that will forever differentiate the value of human intelligence from that of artificial intelligence.

Artificial and Human Intelligence Live in Harmony

This is the perspective that I align myself with. While I am aware that machines have replaced, and will always replace, human jobs and roles, there are strong limitations to technology, as suggested above. Where humans have emotion and sensitivity, technology does not, and these are things that cannot be taught. The displacement of work by robots, however, may not be entirely negative. Take, for example, military robots. These ‘“unmanned systems” are better suited than human soldiers for “dull, dirty or dangerous missions”‘ (Myers, 2009), and their introduction a few years ago has resolved the problem of fatigued crew members, and of casualties from failed bomb disposals. Where risk and discomfort can be eliminated for humans, I believe that technology has an obligation to take over these roles.

While the Internet of Things and its growing popularity threaten the jobs of many blue- and white-collar workers, it should be explored and understood rather than feared.


Botnets (and a few other absurdities)

When faced with the word ‘botnet’, I had no idea of what it was, what it meant, or if it was going to intrigue me enough to write about it. Regardless, I set out to enlighten myself, and committed to writing about this mysterious word.

Turns out the term is actually a combination of the words ‘robot’ and ‘network’, which makes a lot of sense now that I think about it. Typically, bots are used by criminals who ‘distribute malicious software (also known as malware) that can turn your computer into a bot (also known as a zombie)’ (Microsoft, 2014). When this happens, these little bots can make your computer perform automated tasks on the Internet without you even knowing. When a number of computers are infected, a network is formed, and in turn a botnet is born. A botnet is also known as a zombie army, which sounds pretty cool (we were all thinking it); however, botnets are far more dangerous than cool, and ‘according to a report from Russian-based Kaspersky Labs, botnets — not spam, viruses, or worms — currently pose the biggest threat to the Internet’ (Rouse, 2012).

The person who coordinates this sort of attack is referred to as the zombie master, and their motives usually come down to crippling competitors or making money. To do the first, the zombie master would configure a DDoS attack, whereby the botnet is programmed to redirect ‘transmissions to a specific computer, such as a Web site that can be closed down by having to handle too much traffic’ (Rouse, 2012). To make money, the zombie master might send spam, or attempt to steal personal and private information such as credit card numbers or bank credentials. Both approaches rely on gaining access to an unprotected computer, so make sure your firewall is updated and prepared for battle!
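To make the defensive side of this a little more concrete, here is a minimal sketch, my own illustration rather than anything from Rouse, of the per-IP rate limiting a web server might apply when a botnet floods it with traffic. Real defences layer firewalls, load balancers and content delivery networks on top of this basic idea:

```python
import time
from collections import defaultdict, deque

# A minimal sketch (my own illustration) of per-IP rate limiting, one of the
# basic defences against the flood of requests a DDoS botnet generates.

WINDOW_SECONDS = 10   # look at the last 10 seconds of traffic per IP address
MAX_REQUESTS = 20     # allow at most 20 requests per IP in that window

recent_requests = defaultdict(deque)  # ip -> timestamps of its recent requests

def allow_request(ip: str) -> bool:
    """Return True if this IP is under the limit, False if it looks like a flood."""
    now = time.time()
    timestamps = recent_requests[ip]
    # Discard timestamps that have fallen out of the sliding window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) >= MAX_REQUESTS:
        return False  # too much traffic from this IP: drop or challenge it
    timestamps.append(now)
    return True

if allow_request("203.0.113.7"):  # a documentation-only example address
    print("serve the page")
else:
    print("drop or challenge the request")
```

Of course, a zombie master controlling thousands of infected machines can sidestep a per-IP limit by spreading requests thinly across the whole botnet, which is why a simple sketch like this is only one layer of a real defence.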

While botnets can be quite vicious, something I found quite comical was that amidst my attempts to learn about botnets, I came across a number of tutorials on how to create bots! It’s like a free manual on how to rob a bank being handed out outside the bank; it seemed absurd. Then it dawned on me: our entire world is becoming more and more absurd with every new piece of technology introduced.

Take for example the Internet of Things, a concept I will be exploring next week. Between 23 December 2013 and 6 January 2014, Proofpoint researchers detected a specific botnet that was aggressively mailing malicious spam three times a day. “A more detailed examination suggested that while the majority of mail was initiated by “expected” IoT [Internet of Things] devices such as compromised home-networking devices (routers, NAS), there was a significant percentage of attack mail coming from other non-traditional sources, such as connected multi-media centers, televisions and at least one refrigerator.”

A fridge was under the control of a zombie master, and was sending spam! What is this world coming to?


Forever an Enigma

To save giving an extensive outline of what WikiLeaks is exactly, perhaps you should watch this:

[Embedded video introducing WikiLeaks]

The eternal argument is whether WikiLeaks is beneficial to society. Is complete transparency, in regards to the Government, international warfare, politics and a myriad of other things, going to bring positive effects for members of society?

This all leads to the question of transparency and, in regards to WikiLeaks, to the benefit of the complete disclosure of all information. Mark Fenster addresses this in his report, ‘Disclosure’s Effects‘. He observes that the disclosure of information can have transformative effects, both negative and positive: ‘Disclosure can inform, enlighten, and energize the public, or it can create great harm or stymie government operations‘ (Fenster, 2011). While he offers a fairly objective view on the impact of disclosure, Fenster does not agree that WikiLeaks fosters or encourages transparency; rather, he argues that it threatens transparency.

In relation to the government, transparency does not actually mean total openness, with every card in hand shown publicly. Government transparency refers more to ‘demonstrating that decisions are fact-based and use complete, relevant data‘. With this in mind, transparency can promote ‘accountability and provide information for citizens about what their government is doing’.

WikiLeaks is viewed as being at the forefront of the push for a completely transparent government structure in many nations, including Australia. However, the kind of transparency that WikiLeaks aims for can be incredibly destructive, and looks more like a teenager spreading rumours than a pathway to accountability. This is perhaps why I cannot simply accept WikiLeaks as a knight in shining armour, here to enlighten the public sphere of all the dark secrets the government has kept locked away. The transparency that Fenster speaks of involves a two-sided balance, and WikiLeaks disturbs that balance.

The US government has long relied on the ‘mosaic theory’ to justify withholding unclassified information from those who request it. According to this theory, ‘bits of unclassified and seemingly innocuous information may threaten national security when they are pieced together in a broad compilation or “mosaic”‘.

Alongside this view is that of writer Jason Pontin, who argues that ‘neither innovations, nor art, nor contracts, nor representative government, nor marriages, nor many other valuable things would exist without secrets’. This is a notion I agree with. While the word ‘secrets’ has a destructive stigma attached to it, I believe that secrets in one area create value in another, because if everyone knew everything, there would be no value, or power, in knowledge.

Although I seem to have taken a stance opposing WikiLeaks (something I was attempting not to do), the truth is that I feel as though the classified information that has been leaked in the past has caused nothing but angst and upset in the public sphere, and Assange even admits his intent ‘to induce fear and paranoia in … [the] leadership and planning coterie‘. Surely this is not a valid reason to upset the balance that the government are trying so hard to maintain.

It appears that WikiLeaks ‘seeks to advance an agenda of self-aggrandizement at the expense of U.S. interests, with reckless disregard for the consequences of its actions’ and ‘there is a difference between holding government accountable for its decisions and holding government officials hostage to their words’. Although, the real truth behind whether WikiLeaks is a positive platform for society will forever remain an enigma in my mind.


Lights, Camera, Slacktion!

Slacktivism is a term I was first introduced to in my first year of university. While nowadays it can be synonymous with ‘feel-good activism’, Popova defines the term in a way that I’ve never seen matched: ‘the tendency to passively affiliate ourselves with causes for the sake of peer approval rather than taking real, high-stakes action to support them’. With the ease and immediacy of social media, almost any individual can participate in a revolution or protest, yet it is this very ease of participation that has allowed slacktivism to flourish in Western societies.

Of course, there is no doubt that social media has transformed traditional activism and made way for a new era of revolution; just look at the #Euromaidan protests in Ukraine, or the Kony 2012 campaign. However, as magical as the Internet may seem, people have (as always) tainted its genius.

Nearly half of the world’s population lives on less than $2.50 a day, with at least 25% living in extreme poverty. Most first-world occupants don’t realise the extent of their fortune: ‘If you have money in the bank, in your wallet, and spare change in a dish someplace, you are among the top 8% of the world’s wealthy’. This is why we become culprits of slacktivism: we not only neglect to realise how wealthy we really are, but we forget the reality of those in the remaining 92%. Some are homeless, some are sick, some are dying, while many are engaged in true activism.

In saying this, the Internet is now one of the few tools that enables us to see, and even get to know, these ‘unknown others’. So rather than arguing that the Internet makes us lazy sloths, I will agree with Popova when she says that ‘online communities broaden our scope of empathy’.

A study conducted by Christopher Jones explores the successes and failures of three major activism events that occurred offline with the help of online campaigns. His findings point to the ‘ability of the internet to revolutionize offline social and political action in a way that was never possible before’. The Internet provides a platform for communication between those who can physically engage in a protest or revolution and those on the other side of the world, searching for some way to help. The term slacktivism should not be associated with the Internet itself, but with the Internet’s users. Yes, many people, if not the majority, use Twitter and Facebook to like and retweet posts that induce a sense of philanthropy into their lives without forcing them to actually do anything of worth. However, those who have engaged in activism online have made a world of change, and it is for this reason that slacktivism should not be confused with online activism.


What is credibility?

Citizen journalism can be really empowering for the average consumer; it transforms us from a once dormant audience into active participants who both view and create content, turning us into produsers. We are constantly collaborating with an entire network of other average citizens to create a news sphere that stretches beyond the means of traditional journalism. “Citizen journalism is discursive and deliberative, and better resembles a conversation than a lecture” (Gillmor, 2003). It isn’t bound by the constraints of authority or obligation, so information can be published at a more rapid and personal rate, delivered to our very own newsfeed or mobile phone at any second.

The BBC have taken the idea of citizen journalism and embraced it. Over the past 10 years, they have been undergoing a process of restructuring to enable full utilisation of the content and resources that their average viewers have to share.

The BBC took note on July 7th 2005, when terrorists bombed the London Underground, sending the whole nation into a state of hysteria. Prior to the onslaught of photographs, emails and SMS messages being broadcast across the web, the explosions were deemed nothing more than a “power surge”. It wasn’t until the story was taken into the hands of innocent bystanders that the full truth was revealed. Richard Sambrook, a BBC executive, reflected: ‘when major events occur, the public can offer us as much new information as we are able to broadcast to them. From now on, news coverage is a partnership.’

However, this new notion of citizen journalism has caused skepticism in some individuals, who question the true credibility of the content these citizens contribute. Often these people associate citizen journalism with unreliability, not because it is found to be fraudulent or false, but because the content is rarely filtered or fact-checked. This is unlike how traditional media functioned, where ‘the journalistic production was controlled through the practice of gatekeeping: the ‘gates’ of the journalistic publication were considered sacrosanct, and served as filters for news items which were considered to be unimportant, uninteresting, or otherwise irrelevant for audiences‘ (Bruns, 2009). This skepticism is not wrong, however it may be slightly old-fashioned.

Credibility is no less relevant to citizen journalism than it was to traditional media prior to the Internet. Rather than the value of credibility disappearing, all that it entails has shifted to adapt to the particular media platform being explored. That is, users of new media ‘seem to apply different criteria to different media’ (Carroll, 2011). This implies that users apply a different set of standards when judging the reliability of new convergent media platforms than they do when judging traditional media. Research by Carroll and Richardson found that consumers trust a news source based on identification and affiliation. There exists, therefore, a “perceived sameness” (Carroll & Richardson, 2011) that allows the citizen journalist, for example a blogger, to satisfy the reader’s perception of credibility by sharing a common set of values and beliefs. In short, consumers find credibility in a communicator’s ability to be relatable.


Google becoming like Apple?

We all know that Apple is known for its closed nature, and the fact that its products cannot ‘be programmed by outsiders’ (Zittrain, 2010). We also know that Android is Apple’s counterpart, offering a completely open source platform, so that anybody can take the code and do what they like with it.

One thing we think we know is that Google is synonymous with Android. This is not exactly the reality. In 2007, Google launched its Android Open Source Project (AOSP), only months after the first iPhone was released. It was essentially an act of precaution and defence, as Google felt threatened by the success of Apple’s very popular smartphone. ‘Google decided to give Android away for free and use it as a trojan horse for Google services. The thinking went that if Google Search was one day locked out of the iPhone, people would stop using Google Search on the desktop’ (Amadeo, 2013).

Fast-forward a few years, and Android now takes up 40% of the market share, with the operating system predicted to have one billion users by the end of this year.

Google are now in a little dilemma. ‘If a company other than Google can come up with a way to make Android better than it is now, it would be able to build a serious competitor and possibly threaten Google’s smartphone dominance’ (Amadeo, 2013). While it was easy for Google to give away the Android code when they were sure they would fail without doing so, the company is now weighing up ways to protect its valuable project without completely closing it off.

You’ll have already noticed that many of Google’s applications, such as Maps, Calendar and Drive, are not open source. Google also continues to close off its previously AOSP-run applications by simply creating better, closed alternatives.

Amadeo brings to light the different elements of AOSP that Google have dropped and ceased to update, comparing them with the proprietary Google Play apps that replaced them. ‘While you can’t kill an open source app, you can turn it into abandonware by moving all continuing development to a closed source model‘ (2013).

For example, Google Play Music has replaced AOSP Music:

[Image: AOSP Music compared with Google Play Music. Source: http://www.wired.co.uk/news/archive/2013-10/21/googles-iron-grip-on-android, accessed 13/9/2014]

This is becoming a trend for Google, and it is quite cunning of them. While they protect themselves from monsters who might take up the Android code and make something better of it, they still ensure that everything Android remains open source. They do so simply by creating closed, proprietary applications that work more efficiently, look nicer, get upgraded and are just better in every way, so that people no longer want to use the AOSP applications; in fact, most people don’t even know the difference. This lets Google have a bit more control over what users do… sound familiar, anyone?

It feels like Google are leaning towards the philosophy of Apple’s former CEO, Steve Jobs, who said, ‘you don’t want your phone to be like a PC. The last thing you want is to have loaded three apps on your phone and then you go to make a call and it doesn’t work any more’ (Zittrain, 2010).

Google make it look like they are doing users a favour, when really they are looking out for number one.


A Shared Culture

We all want to live in a shared culture, but how do we do that when we’re not allowed to share anything?

Creative Commons is a ‘non-profit organization that provides copyright owners with free licences allowing them to share, reuse and remix their material, legally’ (Creative Commons, 2014). It essentially gives the creator control over how they want their work to be licensed, and lets them say ‘the world can use, remix or edit my stuff, as long as they attribute the original thing to me’.

This idea of a shared culture is much like Lessig’s ‘free culture’. ‘Free cultures are cultures that leave a great deal open for others to build upon; unfree, or permission, cultures leave much less. Ours was a free culture. It is becoming much less so’ (Lessig, 2004). It might be ignorant, or idealistic, of me to think that these free or shared cultures are a real possibility, but I believe the world was designed for collaboration. An experiment conducted by Professor Alice Roberts, publicised on the BBC Two Horizon programme, showed that when working together on a task, human babies share out uneven rewards fairly. It is, therefore, instinctive, even as infants, for people to want to communicate and cooperate with one another.

Creative Commons is one step closer to this reality of a shared culture. Teodor Mitew discusses the architecture of participation, suggesting that ‘the former consumers are now also the biggest producers of content’. This integration of production and consumption allows individuals to create and consume at the same time, and allows for a new level of creativity. If anybody can create content, everybody should create content, because the more creators there are contributing to the world, the more the collaboration process can evolve and succeed. ‘No one person, no one alliance, no one nation, no one of us is as smart as all of us thinking together’ (Stavridis, 2012). The world will not eventuate into much at all if we build up walls that stop dialogue.

‘More generally, order may remain when people see themselves as a part of a social system, a group of people—more than utter strangers but less than friends—with some overlap in outlook and goals. Whatever counts as a satisfying explanation, we see that sometimes the absence of law has not resulted in the absence of order. Under the right circumstances, people will behave charitably toward one another in the comparative absence or enforcement of rules that would otherwise compel that charity’ (Zittrain, 2008).

Admiral James Stavridis, a former Supreme Allied Commander of NATO, argues that while we search for security by building walls and isolating ourselves, we are actually losing it, and suggests a new paradigm in which security is found in connection and collaboration with other people: ‘Instead of building walls for security, we need to build bridges’ (2012). Although laws and other legalities make ‘walls’ seem a justified foundation for security, dialogue between members of society should be the priority, as it is what brings about a true shared culture, which would in turn create the most stable society.

My challenge is that we should stop looking inwards, forever subdued by an arrogance that makes us overly defensive of everything we create, and instead look outwards into the world we are living in, attempting to transform it into a shared world, flooded with free cultures.
