What is the Internet of Things? I have been trying to answer this question for almost a year, as it is a term that is thrown around excessively in my media degree. At its most basic level, I understand that the Internet of Things refers to any tangible device with the ability to connect to the Internet; however, the term encompasses much more than this.
‘The Internet of Things (IoT) is a scenario in which objects, animals or people are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction’ (Rouse, 2014). It is the idea of taking the person out of the connection sequence that intrigues me most. Can technology, and advances in technology, replace human intelligence?
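To make Rouse's definition a little more concrete, here is a minimal sketch, in Python, of what a ‘thing’ in this scenario does: it carries a unique identifier and packages up its readings to send over a network, with no human in the loop. Every name and field below is my own hypothetical illustration, not any real IoT protocol.

```python
import json
import time
import uuid

# Each device gets its own unique identifier, as in Rouse's definition.
DEVICE_ID = str(uuid.uuid4())

def build_report(reading, device_id=DEVICE_ID):
    """Package a sensor reading as a JSON message, tagged with the
    device's unique identifier and a timestamp."""
    return json.dumps({
        "device_id": device_id,
        "temperature_c": reading,
        "timestamp": time.time(),
    })

# On a real device this message would be sent over the network on a
# timer (e.g. via HTTP or MQTT) -- no person required at either end.
message = build_report(21.5)
```

The point of the sketch is simply that the ‘connection sequence’ runs device-to-network with no human step in between.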
The knowledge that technology and machines run on is often referred to as Artificial Intelligence (A.I.). At its simplest, Artificial Intelligence is a set of rules in program code that instructs and enables software or hardware to identify a sequence or pattern through detection, and then to respond with a pre-programmed, prescribed action (Shah, 2014). Artificial Intelligence aims to make machines think like humans, but given the complexity of the human brain, surely this is not possible.
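Shah's description of pre-programmed rules can be sketched in a few lines of Python. This is a deliberately toy illustration of the ‘detect a pattern, respond with a prescribed action’ idea; every rule and message here is hypothetical, invented for the example.

```python
# The "intelligence" is nothing more than a hand-written list of
# pattern -> action rules (all hypothetical).
RULES = [
    ("motion detected", "turn on lights"),
    ("temperature high", "start fan"),
    ("door opened", "send alert"),
]

def respond(event):
    """Scan the rules for a matching pattern and return the
    pre-programmed action; if nothing matches, do nothing."""
    for pattern, action in RULES:
        if pattern in event:
            return action
    return "no action"
```

Calling `respond("motion detected in the hallway")` returns the prescribed action `"turn on lights"`; anything the programmer did not anticipate falls through to `"no action"`, which is exactly the limitation the next sections discuss.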
While I am not an Artificial Intelligence guru, I will try my best to present you with a number of views surrounding it.
Machines Will Steal Your Job
This is perhaps the most popular perspective I have found upon exploring Artificial Intelligence. It appears as though many people fear losing their job to a piece of technology. This fear is not irrational, as history has already proven it a reality. In the 1980s, mid-level draftsmen were replaced by software; in the 1800s, British textile artisans were replaced by mechanised looms; and countless cash-register staff are now being replaced by self-serve counters.
Research conducted by Pew Research surveyed over 2,000 A.I. experts and found that while 52% were optimistic that Artificial Intelligence will prove a positive force between now and 2025, the remaining 48% worried for the future. However, all agreed that ‘the displacement of work by robots and AI is going to continue, and accelerate, over the coming decade’ (Hern, 2014).
Artificial Intelligence Cannot Replace Humans
The human brain thinks in a non-linear fashion, and can therefore make sense of non-linear time and life. Harish Shah suggests that ‘technology was always with limits, and those limits are permanent’ (2014). Long-running cognitive research has shown that ‘cognitive consciousness requires a physical organic biological body’, something that technology simply lacks. While a computer can store more data and make faster calculations, you cannot programme consciousness, intuition or spontaneity into any piece of technology, a limit that will forever separate the value of human intelligence from that of artificial intelligence.
Artificial and Human Intelligence Live in Harmony
This is the perspective that I align myself with. While I am aware that machines have replaced, and will continue to replace, human jobs and roles, technology has strong limitations, as suggested above. Where humans have emotion and sensitivity, technology does not, and these are things that cannot be taught. The displacement of work by robots, however, may not be entirely negative. Take, for example, military robots. These ‘“unmanned systems” are better suited than human soldiers for “dull, dirty or dangerous missions”’ (Myers, 2009), and their introduction a few years ago has eased the problem of fatigued crew members, and of casualties from failed bomb defusals. Where risk and discomfort can be eliminated for humans, I believe that technology has an obligation to take over these roles.
While the Internet of Things, and its growing popularity, threatens the jobs of many blue and white collar workers, it should be a thing explored and understood, rather than feared.
- Boyd Myers, C 2009, Will a Machine Replace You, Forbes, 22 June, viewed 23 October 2014, <http://www.forbes.com/2009/06/18/technology-obsolete-jobs-opinions-contributors-artificial-intelligence-09-myers.html>
- Burn-Callandar, R 2013, Artificial Intelligence ‘will take the place of humans within five years’, The Telegraph, 29 August, viewed 23 October 2014, <http://www.telegraph.co.uk/finance/businessclub/technology/10274420/Artificial-intelligence-will-take-the-place-of-humans-within-five-years.html>
- Hern, A 2014, Will robots take our jobs? Experts can’t decide, The Guardian, 7 August, viewed 23 October 2014, <http://www.theguardian.com/technology/2014/aug/06/robots-jobs-artificial-intelligence-pew>
- Rouse, M 2014, Internet of Things (IoT), WhatIs, June, viewed 23 October 2014, <http://whatis.techtarget.com/definition/Internet-of-Things>
- Shah, H 2014, Why Artificial Intelligence will not replace a human futurist, India Future Society, 20 April, viewed 23 October 2014, <http://indiafuturesociety.org/artificial-intelligence-will-replace-human-futurist/>
- Yee, H 2012, Can Technology Replace Human Intelligence, TED Conversations Archives, viewed 23 October 2014, <http://www.ted.com/conversations/9837/can_technology_replace_human_i.html>
When faced with the word ‘botnet’, I had no idea what it was, what it meant, or whether it would intrigue me enough to write about it. Regardless, I set out to enlighten myself, and committed to writing about this mysterious word.
It turns out the term is actually a combination of the words ‘robot’ and ‘network’, which makes a lot of sense now that I think about it. Typically, bots are used by criminals who ‘distribute malicious software (also known as malware) that can turn your computer into a bot (also known as a zombie)’ (Microsoft, 2014). When this happens, these little bots can make your computer perform automated tasks over the Internet without you even knowing. When a number of infected computers are networked together, a botnet is born. A botnet is also known as a zombie army, which sounds pretty cool (we were all thinking it); however, botnets are far more dangerous than cool, and ‘according to a report from Russian-based Kaspersky Labs, botnets — not spam, viruses, or worms — currently pose the biggest threat to the Internet’ (Rouse, 2012).
The person who coordinates this sort of attack is referred to as the zombie master, and their motives are often either to cripple their competitors or to make money. To achieve the former, the zombie master can launch a distributed denial-of-service (DDoS) attack, whereby the botnet is programmed to redirect ‘transmissions to a specific computer, such as a Web site that can be closed down by having to handle too much traffic’ (Rouse, 2012). To make money, the zombie master might send spam, or attempt to steal personal and private information, including credit card numbers or bank credentials. Both schemes rely on having access to an unprotected computer, so make sure your firewall is updated and prepared for battle!
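The defensive side of this can be sketched too. One very naive thing a firewall or server might do to spot flood-like DDoS traffic is count requests per source address within a time window and flag anything abnormal. This Python sketch is purely illustrative; the threshold is made up, the addresses are reserved documentation addresses, and real DDoS mitigation is far more sophisticated than this.

```python
from collections import Counter

# Hypothetical threshold: more requests than this from one address
# in a single time window looks like flood traffic, not a person.
FLOOD_THRESHOLD = 100

def flag_flooders(request_log, threshold=FLOOD_THRESHOLD):
    """Count requests per source address in one time window and
    return the set of addresses that exceed the threshold."""
    counts = Counter(request_log)
    return {addr for addr, n in counts.items() if n > threshold}

# One window of traffic: a single address hammering the server
# while a normal visitor makes a handful of requests.
window = ["203.0.113.9"] * 500 + ["198.51.100.2"] * 4
suspects = flag_flooders(window)  # {"203.0.113.9"}
```

The catch, of course, is that a botnet spreads the flood across thousands of zombie machines, so no single address trips a per-address counter like this, which is exactly what makes botnets so hard to defend against.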
While botnets can be quite vicious, something I found quite comical was that amidst my attempts to learn about botnets, I came across a number of tutorials on how to create bots! It’s like a free manual on how to rob a bank being handed out outside the bank; it seemed absurd. Then it dawned on me: our entire world is becoming more and more absurd with every new piece of technology introduced.
Take for example the Internet of Things, a concept I will be exploring next week. Between 23 December 2013 and 6 January 2014, Proofpoint researchers detected a specific botnet that was aggressively mailing malicious spam three times a day. “A more detailed examination suggested that while the majority of mail was initiated by “expected” IoT [Internet of Things] devices such as compromised home-networking devices (routers, NAS), there was a significant percentage of attack mail coming from other non-traditional sources, such as connected multi-media centers, televisions and at least one refrigerator.”
A fridge was under the control of a zombie master, and was sending spam! What is this world coming to?
- Microsoft 2014, What is a botnet?, viewed 16 October 2014, <http://www.microsoft.com/security/resources/botnet-whatis.aspx>
- Constantin, L 2014, Botnet brute-forces remote access to point-of-sale systems, PCWorld, 9 July, viewed 16 October 2014, <http://www.pcworld.com/article/2452340/botnet-bruteforces-remote-access-to-pointofsale-systems.html>
- Kassner, M 2014, Internet of Things botnet may include TVs and a fridge, Tech Republic, 21 January, viewed 16 October 2014, <http://www.techrepublic.com/blog/it-security/internet-of-things-botnet-may-include-tvs-and-a-fridge/>
- Rouse, M 2012, Botnet (Zombie army), TechTarget, February, viewed 16 October 2014, <http://searchsecurity.techtarget.com/definition/botnet>
- Verton, D 2014, The War on botnets Evolves, Fedscoop, 22 October, viewed 16 October 2014, <http://fedscoop.com/war-botnets-evolves/>
Rather than give an extensive outline of what WikiLeaks is exactly, perhaps you should watch this:
The eternal argument is whether WikiLeaks is beneficial to society. Is complete transparency, in regard to government, international warfare, politics and a myriad of other things, going to bring positive effects for members of society?
This all leads to the point of transparency, and in regards to WikiLeaks, questions the benefit of the complete disclosure of all information. Mark Fenster addresses this in his report, ‘Disclosure’s Effects‘. He observes that the disclosure of information can have transformative effects, both negative and positive. ‘Disclosure can inform, enlighten, and energize the public, or it can create great harm or stymie government operations‘ (Fenster, 2011). While he offers a quite objective opinion on the impact of disclosure, Fenster does not agree that WikiLeaks fosters, or encourages transparency, rather that it threatens transparency.
In relation to the government, transparency does not actually mean total openness, with every card at hand shown publicly. Government transparency refers more to ‘demonstrating that decisions are fact-based and use complete, relevant data‘. With this in mind, transparency can promote ‘accountability and provide information for citizens about what their government is doing’.
WikiLeaks is viewed as being at the forefront of catalysing the creation of a completely transparent government structure in many nations, including Australia. However, the kind of transparency that WikiLeaks aims for can be incredibly destructive, and looks more like a teenager spreading rumours than a pathway to accountability. This is perhaps why I cannot simply accept WikiLeaks as a knight in shining armour, here to enlighten the public sphere of all the dark secrets the government has kept locked away. The transparency that Fenster speaks of involves a two-sided balance, and WikiLeaks disturbs that balance.
The US government has long-relied on the ‘mosaic theory’ to excuse and justify withholding unclassified information from those who request it. According to this theory, ‘bits of unclassified and seemingly innocuous information may threaten national security when they are pieced together in a broad compilation or “mosaic”‘.
Alongside this view is that of writer Jason Pontin, who argues ‘neither innovations, nor art, nor contracts, nor representative government, nor marriages, nor many other valuable things would exist without secrets’. This is a notion I agree with. While the word ‘secrets’ has a destructive stigma attached to it, I believe that secrets in one area create value in another, because if everyone knew everything, there would be no value, or power, in knowledge.
Although I seem to have taken a stance opposing WikiLeaks (something I was attempting not to do), the truth is that I feel as though the classified information that has been leaked in the past has caused nothing but angst and upset in the public sphere, and Assange even admits his intent ‘to induce fear and paranoia in … [the] leadership and planning coterie‘. Surely this is not a valid reason to upset the balance that the government are trying so hard to maintain.
It appears that WikiLeaks ‘seeks to advance an agenda of self-aggrandizement at the expense of U.S. interests, with reckless disregard for the consequences of its actions’ and ‘there is a difference between holding government accountable for its decisions and holding government officials hostage to their words’. Although, the real truth behind whether WikiLeaks is a positive platform for society will forever remain an enigma in my mind.
- Ashong, D 2011, ‘The Truth About Transparency – Why WikiLeaks is Bad For All of Us’, Huffington Post, 29 November, viewed 9 October 2014, <http://www.huffingtonpost.com/derrick-ashong/the-truth-about-transpare_b_789196.html>
- Fenster, M 2011, ‘Disclosure’s Effects: WikiLeaks and Transparency’, Iowa Law Review, vol. 97, viewed 9 October 2014, <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1797945>
- Pontin, J 2011, ‘Is WikiLeaks a Good Thing?’, MIT Technology Review, 22 February, viewed 9 October 2014, <http://www.technologyreview.com/fromtheeditor/422871/is-wikileaks-a-good-thing/>
- Weismann, A 2010, ‘WikiLeaks Damages Hopes for a Transparent Government’, Huffington Post, 9 September, viewed 9 October 2014, <http://www.huffingtonpost.com/anne-l-weismann/wikileaks-damages-hopes-f_b_794312.html>
Citizen journalism can be really empowering for the average consumer; it transforms us from being the once dormant audience into being active participants, who both view and create content, turning us into produsers. We are constantly collaborating with an entire network of other average citizens to create a news sphere that stretches beyond the means of traditional journalism. “Citizen journalism is discursive and deliberative, and better resembles a conversation than a lecture” (Gillmor, 2003). It isn’t bound by the constraints of authority or obligation, and so information can be published at a more rapid and personal rate, being delivered to our very own newsfeed or mobile phone at any second.
The BBC have taken the idea of citizen journalism and embraced it. Over the past 10 years, they have been restructuring to make full use of the content and resources that their average viewers have to share.
The BBC took note on 7 July 2005, when terrorists bombed the London Underground, sending the whole nation into a state of hysteria. Before the onslaught of photographs, emails and SMS messages broadcast across the web, the explosion was deemed nothing more than a ‘power surge’. It wasn’t until the story was taken into the hands of innocent bystanders that the full truth was revealed. Richard Sambrook, a BBC executive, reflected, ‘when major events occur, the public can offer us as much new information as we are able to broadcast to them. From now on, news coverage is a partnership.’
However, this new notion of citizen journalism has caused scepticism in some individuals, who question the true credibility of the content these citizens contribute. Often these people associate citizen journalism with unreliability, not because it is found to be fraudulent or false, but because the content is rarely filtered or fact-checked, unlike in traditional media, where ‘the journalistic production was controlled through the practice of gatekeeping: the ‘gates’ of the journalistic publication were considered sacrosanct, and served as filters for news items which were considered to be unimportant, uninteresting, or otherwise irrelevant for audiences’ (Bruns, 2009). This scepticism is not wrong, however it may be slightly old-fashioned.
Credibility is no less relevant to citizen journalism than it was to traditional media before the Internet. Rather than the value of credibility disappearing, all that it entails has shifted to suit the particular media platform in question. That is, users of new media ‘seem to apply different criteria to different media’ (Carroll, 2011). This implies that users apply a different set of standards to judge the reliability of new convergent media platforms than they do to judge traditional media. Research by Carroll and Richardson found that consumers trusted a news source based on identification and affiliation. There exists, therefore, a ‘perceived sameness’ (Carroll & Richardson, 2011) that allows the citizen journalist, for example a blogger, to satisfy the reader’s sense of credibility by sharing a common set of values and beliefs. In short, consumers find credibility in a communicator’s ability to be relatable.
- Bruns, A 2007, ‘Produsage: Towards a Broader Framework for User-Led Content Creation’, Proceedings Creativity & Cognition 6, viewed 18 September 2014, http://eprints.qut.edu.au/6623/1/6623.pdf
- Bruns, A 2009, ‘News Blogs and Citizen Journalism: New Directions for e-Journalism’, viewed 18 September 2014, http://snurb.info/files/News%20Blogs%20and%20Citizen%20Journalism.pdf
- Carroll, B & Richardson, R 2011, ‘Identification, Transparency, Interactivity; Towards a New Paradigm for Credibility for Single-Voice Blogs’, Writing for Digital Media, New York, NY: Routledge
- Sambrook, R 2005, ‘Citizen Journalism and the BBC’, Nieman Reports, 15 December, viewed 18 September 2014, http://niemanreports.org/articles/citizen-journalism-and-the-bbc/
We all know that Apple is known for its closed nature, and the fact that its products cannot ‘be programmed by outsiders’ (Zittrain, 2010). We also know that Android is the counterpart to Apple’s approach, offering a completely open-source platform so that anybody can take the code and do what they like with it.
One thing we think we know is that Google is synonymous with Android. This is not exactly the reality. In 2007, Google launched the Android Open Source Project (AOSP), only months after the first iPhone was released. It was essentially an act of precaution and defence, as Google felt threatened by the success of Apple’s very popular smartphone. ‘Google decided to give Android away for free and use it as a trojan horse for Google services. The thinking went that if Google Search was one day locked out of the iPhone, people would stop using Google Search on the desktop’ (Amadeo, 2013).
Fast-forward a few years, and Android now takes up 40% of the market share, with the operating system predicted to have one billion users by the end of this year.
Google are now in a little dilemma. ‘If a company other than Google can come up with a way to make Android better than it is now, it would be able to build a serious competitor and possibly threaten Google’s smartphone dominance’ (Amadeo, 2013). While it was easy for Google to give away the Android code when they were sure they’d fail without doing so, the company is now devising ways in which it can protect its valuable project without completely closing it off.
You’ll have already noticed that many of Google’s applications are not open, such as Maps, Calendar and Drive. However, Google continues to close off its previously AOSP-run applications by simply creating better, closed alternatives.
Amadeo brings to light the elements of AOSP that Google have dropped and ceased to update, comparing them with the proprietary Google Play apps that replaced them. ‘While you can’t kill an open source app, you can turn it into abandonware by moving all continuing development to a closed source model’ (2013).
For example, Google Play Music has replaced AOSP Music:
This is becoming a trend for Google, and is quite cunning of them. While they protect themselves from monsters who might take up the Android code and make something better of it, they still ensure that everything Android remains open source. They do so simply by creating closed, proprietary applications that work more efficiently, look nicer, get upgraded and are just better in every way, so that people don’t want to use the AOSP applications anymore; in fact, people don’t even know the difference. This lets Google have a bit more control over what users do… sound familiar, anyone?
It feels like Google are swaying towards the likes of Apple’s previous CEO, Steve Jobs, when he said, ‘you don’t want your phone to be like a PC. The last thing you want is to have loaded three apps on your phone and then you go to make a call and it doesn’t work any more’ (Zittrain, 2010).
Google make it look like they are doing users a favour, when really they’re looking out for number one.
- Amadeo, R 2013, ‘Google’s Iron Grip on Android’, Wired, 21 October, viewed 13 September 2014, http://www.wired.co.uk/news/archive/2013-10/21/googles-iron-grip-on-android
- Gartner 2014, Gartner says annual smartphone sales surpassed sales of feature phones for the first time in 2013, Gartner, viewed 13 September 2014, http://www.gartner.com/newsroom/id/2665715
- Vaughan-Nichols, S 2014, ‘Debunking four myths about Android, Google, and open-source’, ZDNet, 18 February, viewed 13 September 2014, http://www.zdnet.com/debunking-four-myths-about-android-google-and-open-source-7000026473/
- Zittrain, J 2010, ‘A fight over freedom at Apple’s core’, Financial Times, 3 February, viewed 13 September 2014, <http://www.ft.com/intl/cms/s/2/fcabc720-10fb-11df-9a9e-00144feab49a.html#axzz3DLOBQEFZ>
We all want to live in a shared culture, but how do we do that when we’re not allowed to share anything?
Creative Commons is a ‘non-profit organization that provides copyright owners with free licences allowing them to share, reuse and remix their material, legally’ (Creative Commons, 2014). It essentially gives the creator control over how they want their work to be licensed, and lets them say ‘the world can use, remix or edit my stuff, as long as they attribute the original thing to me’.
This idea of a shared culture is much like Lessig’s ‘free culture’. ‘Free cultures are cultures that leave a great deal open for others to build upon; unfree, or permission, cultures leave much less. Ours was a free culture. It is becoming much less so’ (Lessig, 2004). It might be ignorant, or idealistic, of me to think that these free or shared cultures are a real possibility, but I believe the world was designed for collaboration. An experiment conducted by Professor Alice Roberts, broadcast on the BBC Two Horizon programme, showed that when working together on a task, human babies share out uneven rewards fairly. It is, therefore, instinctive, even as infants, for people to want to communicate and cooperate with one another.
Creative Commons is one step closer to this reality of a shared culture. Teodor Mitew discusses the architecture of participation, suggesting that ‘the former consumers are now also the biggest producers of content’. This integration of production and consumption allows individuals to create and consume at the same time, and makes possible a new level of creativity. If anybody can create content, everybody should create content, because the more creators there are contributing to the world, the more the collaboration process can evolve and succeed. ‘No one person, no one alliance, no one nation, no one of us is as smart as all of us thinking together’ (Stavridis, 2012). The world will not eventuate into much at all if we build up walls that stop dialogue.
‘More generally, order may remain when people see themselves as a part of a social system, a group of people—more than utter strangers but less than friends—with some overlap in outlook and goals. Whatever counts as a satisfying explanation, we see that sometimes the absence of law has not resulted in the absence of order. Under the right circumstances, people will behave charitably toward one another in the comparative absence or enforcement of rules that would otherwise compel that charity’ (Zittrain, 2008).
Admiral James Stavridis, the Supreme Commander of the North Atlantic Treaty Organisation, argues that while we search for security by building walls and isolating ourselves, we are actually losing security, and proposes a new paradigm in which security is found in connection and collaboration with other people. He suggests that, ‘Instead of building walls for security, we need to build bridges’ (2012). Although laws and other legalities make ‘walls’ seem foundational to something as justified as security, dialogue between members of society should be the priority, as it is what brings about a true shared culture, which would in turn create the most stable society.
My challenge is that we should stop looking inwards, forever subdued by the arrogance that makes us over-defensive of everything we create, and instead look outwards into the world we are living in, and attempt to transform it into a shared world, flooded with free cultures.
- BBC 2012, What Makes Us Human?, online video, BBC Two Horizon, viewed 6 September 2014, <http://www.bbc.co.uk/programmes/b036mrrj>
- Creative Commons 2014, ‘Learn More’, Creative Commons Australia, viewed 6 September 2014, <http://creativecommons.org.au/learn/>
- Dylan, J 2008, A Shared Culture, online video, 4 January, Creative Commons, viewed 6 September 2014, <http://creativecommons.org/videos/a-shared-culture>
- Lessig, L 2004, ‘Creators’, in Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Strangle Creativity
- Stavridis, J 2012, A Navy Admiral’s thoughts on global security, online video, June, TED Talks, viewed 6 September 2014, <http://www.ted.com/talks/james_stavridis_how_nato_s_supreme_commander_thinks_about_global_security>
- Zittrain, J 2008, ‘The Lessons of Wikipedia’, in The Future of the Internet and How to Stop It, Yale University Press, New Haven, CT, pp. 127-148