Week 9 Reflection

This week’s articles are both case studies examining how easily users click through malware warnings. Article one is Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness by Devdatta Akhawe, and article two is Your Attention Please by Cristian Bravo-Lillo et al.  Article one focused on the percentage of “clickthroughs” each web browser (e.g., Google Chrome or Mozilla Firefox) had, because each browser warns its users differently.  Article two used that as a starting point, with the pop-up warning, and designed experiments to see whether different types of pop-ups would be more effective.

Article one’s stated goal was “a 0% clickthrough rate for all SSL warnings: users should heed all valid warnings, and the browser should minimize the number of false positives” (Akhawe, 259).  However, as an outsider to the computer security field and a lay user when it comes to computers, I do not think you can ever have a 0% clickthrough rate, because each person using the Internet has their own set of biases and will either ignore the pop-up warning or abide by it.  People are not machines; we make our own decisions on the spot, while computers make the decisions they are programmed to make.

Article two’s approach to clickthroughs is, I think, more approachable, at least on the user’s end.  Their experiments presented different prompts for the user to go through.  Because each web browser sets up its prompts differently, each user tends to see different prompts.  Should all browsers unify their prompts to make things easier on the user, so people will not just pass over a prompt because they are irritated with all the different types?

Week 8 Reflection

This week’s two readings revolve around search engines and the way users and others interact with them. Article one is Data Voids: Where Missing Data Can Easily Be Exploited by Michael Golebiewski, and article two is Data Craft: The Manipulation of Social Media Metadata by Amelia Acker.

This week is about manipulation and how one can benefit from it. As article one discusses, manipulators latch onto situations like breaking news, or onto outdated information and concepts. “Data voids” is a new term for me (much like every other week – I am constantly looking things up on Google, which, coincidentally, is a major subject this week). As I understand it, the term refers to a void of information on the internet: when someone searches Google or Bing for an obscure subject that does not return very many results, that is a data void. In some situations, such as a breaking news story on a subject that previously had few hits but suddenly generates many because people are reporting on it, media manipulators take notice. I am not entirely sure whether media manipulators are even human; I think they are, but I could not glean from the article whether they were bots or people. In any case – “Unfortunately, the time between the first report and the creation of massive news content is when manipulators have the largest opportunity to capture attention” (Golebiewski, 20). Media manipulators can exploit data voids to spread problematic content or even cause serious harm.  Is this where fake news comes from?

Something else occurred to me as I was reading this week – social media sites, such as YouTube or Twitter, have their own search engines within the website. I had never really registered this (I mean, I knew I could search within the site), but knowing they have their own sophisticated search engines has made me wonder what other types of websites have their own search engines. Or what type of site would need one?

Week 7 Reflection

This week’s readings, Sorting Things Out: Classification and Its Consequences [1] by Geoffrey C. Bowker and Susan Leigh Star; Experimentation in humanitarian locations: UNHCR and biometric registration of Afghan refugees [2] by Katja Lindskov Jacobsen; and Body, biometrics and identity [3] by Emilio Mordini and Sonia Massari, all revolve around the use of technology to keep track of human characteristics.  I feel this topic of tracking humans through biometrics is the most humanitarian subject we have looked at in this course, because it deals with racial typing: installing technology to make choices instead of humans making that call.  For example, in archive theory courses, we talked about how humans still design the programs and technology, and therefore build their preconceived biases into them, to track people based on their race or other characteristics.

Article 1 looked at apartheid-era South Africa’s history of classifying human beings to keep track of the population. The authors point out that racial classification of human beings enables racism, and that a government-run classification program allowed decades of it. Article 2 looked at the UNHCR’s use of iris detection as a way to register refugees. Article 3 focused on biometrics as a whole, stating that the human body is becoming your means of identification.

Week 5 Reflection

This week’s readings were about trust and the Internet, which is fitting, because I am pouring out my inner thoughts about what the authors said on that very topic over the Internet. Article 1 is Surveillance & Society: Debate | Networked Privacy by Danah Boyd; article 2 is Collective Information Practice: Exploring Privacy and Security as Social and Cultural Phenomena by Paul Dourish and Ken Anderson; article 3 is Online Trust, Trustworthiness, or Assurance? by Coye Cheshire.

Boyd’s article is very short and grapples with our networks: how they are multiple layers, all connected, made up of data. Within this networked data, we supposedly have the control to keep our data safe. But do we? Boyd also brings up how people unthinkingly give out information on the internet that affects their family or even future family members, such as giving your DNA to ancestry companies. Your DNA is then forever in their database, traceable and networked.

In article 2, Dourish and Anderson try to have their readers look at privacy and security in a well-rounded, all-inclusive view. They look at the two concepts separately, stating, “privacy, then, is generally approached as a social consideration, whereas security is seen as a technical concern. The relation between them is that security technologies might provide mechanisms by which privacy can be ensured” (Dourish and Anderson, 322).

Article 3 deals with trust in human-to-human and human-to-system interaction, and how trustworthiness is built over time. Cheshire grapples with the idea that humans trust one another through relationships built over time. But the risk is different on the Internet, because we use it for a vast range of things, from business to fun. So when the system fails, we feel betrayed by the computer, even though it is an object and does not know what it is doing (Cheshire, 55–56). But then who are we supposed to blame? The person who programmed it? The person at the other end of the interaction? Or something or someone else?

Week 4 Reflection

This week made us think about how open the Internet is for its users. The articles were Degrees of Freedom, Dimensions of Power by Yochai Benkler (1); The Contingent Internet by David D. Clark (2); and Internet Tussles – A Framework for Analyzing Heterogeneous Networks by Dustin O’Hara (3).

In article 1, Benkler made the reader think about how apps produced for the free market caused phones and tablets to overtake desktops by the 2010s. But what I find interesting is when Benkler brings up the phone networks’ control over phone users: users could then use wifi on their phones, over which the phone networks had no control. If I understand this correctly, I think he is saying the phone networks are centralized power, while the internet is decentralized.

The internet is a jack-of-all-trades network because it can connect the user to so many different applications and is designed for a variety of uses. Clark states on page 10, “computers are general-purpose devices; since the Internet hooks computers together, it too ought to be general.” He follows up by saying the Internet was designed by communications engineers from telephone companies who did not know what the Internet was for: “The engineers from the world of telephone systems were confounded by the task of designing a system without knowing what its requirements were” (Clark, 10). Clark later states that this generality has a price, because the Internet is not perfect for any one particular thing. I think he is saying the Internet is just “okay” at everything it serves a purpose for, and designers have molded it into what it is today.

Article 3 focuses on Internet tussles, a concept from David Clark, and builds upon actor-network theory. Side note – reading O’Hara’s article, it occurred to me (and I should have known, because I am old enough) that the Internet was originally run over telephone lines, and it was done simply because the designers could not afford to build their own network at the time.

O’Hara brings up the phrase “end-to-end,” stating that this “design of the internet not only meant the internet was open for new people to join, it meant it was open for them to build upon” (O’Hara, 2). It is much the same point I brought up about Clark’s article in the previous paragraph: the internet is so open that anyone can add anything to it, without restrictions. Is this how we ended up with things like the dark net and illegal websites? Because of that openness?

Week 3 Reflections

This week’s theme revolved around ethnography and how it relates to the computer science world. Each article takes a different approach to ethnography, and while reading the pieces, I noticed ethnography has a loose definition and can be twisted and molded to fit the topic at hand. The articles this week are Implications for Design (1) by Paul Dourish, The Ethnography of Infrastructure (2) by Susan Leigh Star, and The Field Guide to Human-Centered Design (3).

Article 1 is full of technical jargon and was difficult for this archivist-in-training to get through. But from my understanding, Dourish is examining how HCI (human-computer interaction) has adopted ethnography. He draws on four explorations of the problems of ethnography and design in different contexts to support his argument – “Anderson’s exploration of the issue of ethnography and requirements, Ackerman’s reflections on the social-technical gap, Button’s comparison between different models for ethnographic analysis, and Suchman’s account of forms of ethnographic encounter between technologists and customers.”

Article 2 uses the telephone book as a metaphor for infrastructure and to ask methodological questions about it. Star focuses on infrastructure and its ethnography, honing in on infrastructure as a concept and making the reader think differently about the “boring” parts of computer science. Honestly, she uses too many metaphors, and I feel her argument gets lost in them.

Article 3 is a book of almost 200 pages, broken up into sections – Mindsets, Methods, Ideation, and Implementation. With this structure in mind, the reader knows it is a field guide to help people better understand the technology they are working with. What resonated with me was the authors’ choice to talk about failure and how it is a part of the HCI world: by failing, one learns about the design and can make it better in the next round. The field guide ties human emotions to technology, which is smart given that it deals with HCI concepts.

Overall, the readings were trying to tie humans to the technology they use and to show how designers could make that technology better.

Week Two Reflections

This week’s readings introduced usable security through its design, both in the big picture and in specific examples such as password protection. The three articles were (1) On actor-network theory: A few clarifications plus more than a few complications, (2) A Brief Introduction to Usable Security, and (3) DesignX: Complex Sociotechnical Systems. Each takes a different direction in discussing the design of usable security, some conveying it to newcomers more easily than others.

Article 1, by Bruno Latour, focuses on the computer network and how pervasive it is in today’s society. “Nothing is more intensely connected, more distant, more compulsory and more strategically organized than a computer network” (Latour, 2). Latour uses graphs and examples to show, I think, that networks are more connected than people themselves. This notion sounds very dystopian because it forces the reader to consider how far the computer network has come since its infancy, and how deeply connected to the digital world we are.

Article 2, by Bryan D. Payne and Keith Edwards, is more of an introductory article, speaking in layman’s terms and taking the time to explain the usable security world. They use two areas they consider significant for usable security (password authentication and email encryption) to explain how it affects the everyday person. They go through the history of each, both starting in the early 1980s. Logically, I know computers have been around for most of the twentieth century, but I always think of their widespread use as starting when the World Wide Web became a hit in the mid-1990s. So seeing that these two functions have a longer history surprised me, when in reality it should not have.

Article 3, by Donald A. Norman, also focuses on design, but through the lens of sociotechnical systems such as healthcare and transportation. Norman discusses these issues to build a bridge between them and designers. They are real-world examples and give substance to his argument. I think he is wondering whether the design techniques taught in school are up to the task of the systems society has already put in place.

All three of these articles were full of technical jargon, and I had a hard time sifting through to the core argument of each. However, they were meant to be read by peers within the field, so that is understandable.