Ever since I got my Synology DiskStation, I’ve been meaning to set up Plex, a home media server solution. I got round to setting Plex up months ago, but I could never figure out why I couldn’t connect and browse my media library when I was connected to my home network over VPN. Plex’s Remote Access feature has always confused me too – it requires you to log into a Plex account and pay for a subscription. What I couldn’t understand was why Plex would show me no media while I was connected to my VPN, yet show everything when I was at home.
As iCloud Photo Library is now in full operation, I wondered if it was possible to somehow back up my whole iCloud Photo Library. I had a look at the settings of the Photos app on OS X and found that there’s an option to Download Originals to this Mac:
Store original photos and videos on this Mac. Choose this option if you want to access full-resolution versions of your entire library, even when offline.
I recently decided to start using Launchpad in Yosemite, but I found that on both my MacBook Air and my iMac, there were far too few app icons in the grid.
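The grid size can apparently be changed from the command line. A sketch, assuming the com.apple.dock springboard-columns and springboard-rows keys that Yosemite is reported to honour – the 10×8 grid here is just an example:

```shell
# Set a hypothetical 10-column by 8-row Launchpad grid.
defaults write com.apple.dock springboard-columns -int 10
defaults write com.apple.dock springboard-rows -int 8
# Force Launchpad to rebuild itself with the new grid.
defaults write com.apple.dock ResetLaunchPad -bool TRUE
killall Dock
```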
About 7 years ago, I purchased an HP LaserJet 1018 and have been using it ever since. It’s a basic black-and-white laser printer, but what I’ve found amazing is that in the 7 or so years I’ve had it, I’ve only ever needed to replace the toner cartridge once. Back when I first bought it, OS X wasn’t supported, and unfortunately it still isn’t. I used to get it to work using OpenPrinting.org’s Foomatic, a “database-driven system for integrating free software printer drivers with common spoolers under Unix”. There’s still a full install guide here, but what I’ve found over on Jayway.com is far easier.
I’ve always been a huge fan of TextMate and it was my go-to text editor on Mac for years. I slowly stopped using it when I started my previous job as I never really used my Mac for work purposes anyway. Now that I’m back at university, I’ve had a look at a bunch of text / code / all-purpose editors and decided to try out TextMate 2.0 beta, and so far I’m loving it. What I really love is this feature:
In the past, TextMate has suffered with editing files on a server, but that’s all changed now. If you regularly find yourself SSHed into a remote box and wanting to edit a file using TextMate on your own box, your ship has come in.
TextMate 2 now ships with an rmate (Ruby) script that you can drop onto servers. When you trigger rmate on a remote box, it will connect back to your box, allow you to edit, and update the file on the server with the changes.
(Find out more about rmate on the MacroMates website, including detailed installation and usage instructions.)
This is a great feature as lately I’ve been editing a lot of files on my Raspberry Pi, so being able to work with these files locally on my Mac in a powerful editor like this is a huge bonus and something you definitely don’t get with nano. I know there are terminal text editors more powerful than nano out there, but the rmate feature is really great.
To get rmate working, you must first verify that TextMate is set up to accept rmate connections. This is done through the Terminal pane of TextMate’s preferences. Make a note of the default port (52698), or of the port you change it to, as you will need it later.
For rmate to work, we have to initiate a reverse SSH session to the remote Raspberry Pi:
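Something along these lines should do it – the user name pi and the host raspberrypi.local are assumptions based on a stock Raspberry Pi setup, and 52698 is the default rmate port noted above:

```shell
# Forward port 52698 on the Pi back to TextMate listening on the Mac.
ssh -R 52698:localhost:52698 pi@raspberrypi.local
```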
From there, you can install the rmate gem on the remote server:
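Assuming Ruby and RubyGems are already present on the Pi, the install should just be:

```shell
# Run inside the SSH session on the Pi.
sudo gem install rmate
```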
That’s it! We can now use rmate to edit remote files. To test this:
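A quick sanity check, run from inside the tunnelled SSH session – the file name is just an example:

```shell
# If everything is wired up, this opens the remote file
# in TextMate on the Mac; saving writes it back to the Pi.
rmate test.txt
```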
To connect to your remote server more easily, I suggest adding an alias to your .bash_profile. To do this, open up your .bash_profile (usually ~/.bash_profile) and enter the following:
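A sketch of such an alias – the alias name, user, hostname and port are all assumptions to adjust for your own setup:

```shell
# One word to open the reverse-tunnelled SSH session to the Pi.
alias pi='ssh -R 52698:localhost:52698 pi@raspberrypi.local'
```

After reloading your profile (or opening a new Terminal window), typing pi drops you straight into an rmate-ready session.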
That’s just one of the cool features, but I’m sure I’ll discover plenty more as I start using TextMate more often again. TextMate has also just replaced all the other writing apps on my Mac, including the half-a-dozen Markdown editors I’ve been trying – I just love its simplicity and power.
Revisited is a series of blog posts in which I share words I have previously written for university, school or old blogs. View all Revisited posts here. Revisited 02 – written for: Grand Challenges of Artificial Intelligence (University of Aberdeen), December 2007.
In order to answer this question let’s start by formulating definitions of both machine and know. I will start with a simple dictionary definition of a machine:
“any mechanical or electrical device that transmits or modifies energy to perform or aid in the performance of human tasks”.
I will further focus my discussion on artificial machines such as computers and robots. I have two reasons for this very specific starting point: firstly, I am aware that defining a machine is arbitrary and there is no one agreed and finite definition. Secondly, we need a starting point since the “know” part of the question and our arguments around this could lead us to change or refine the “machine” definition. Having a starting point allows us a common ground to argue on and also aids progress towards a justified answer.
The word knowing feels intuitively simple to define but, although I believe that I know what knowing is, I realise that I cannot easily define it. Therein lies the problem. How do I know that a machine knows if I can’t define know? If I say that knowing something means we have knowledge then I have to define knowledge. We can start with the classic definition of knowledge as “justified, true belief”. This definition however will not bring us much closer to determining whether a machine can know. We need to determine if the machine believes since if it doesn’t believe then it can’t have a belief. Let us investigate the ways of arriving at the belief or, in other words, the ways of knowing. By looking from this angle we can try to bridge from the “how to know” to the population of knowers and conclude whether the machine could be in that population. The main ways of knowing are reason, emotion, perception and language. By looking closer at these ways we may have some chance of reaching a conclusion to our question.
When we use reason, we can either inductively or deductively reach a belief. If I say I know that Thomas is a mammal, I should be able to justify my answer. If I were to use deductive logic, I could use two statements, “All humans are mammals” and “Thomas is a human”. Therefore I could say Thomas is a mammal. Using inductive reason and perception, I could say that I’ve always seen what humans and mammals look like, so I would guess Thomas is a mammal. Perception is largely based on our past experiences and information and knowledge that we have gathered previously and believe to be correct. Whether the reasoning leads to a true belief depends on whether the inputs to the reasoning process were indeed true and justified. Reasoning is an ongoing complex process which does not happen in isolation of language, emotions and feelings and these three are highly intertwined. I have feelings but if I express them in a language unfamiliar to the listener then they are meaningless to that person. If I listen to a topic that I feel strongly about I cannot get emotional about the subject if I do not understand the language. Clearly then language is significant in knowing something.
When relating knowledge to a computer-based machine, we can certainly say that its ways of arriving at output or generating its knowledge fit most of the above categories and hence it could possibly be in the population of knowers. Take for example robot X, which has been equipped with some sensors (such as light and heat sensors), a complicated mathematical program and a voice recognition program. Give the creator of this robot some time to calibrate the machine and soon there will be a lifelike figure, close to human, who will be able to answer questions, react to changes in the surroundings and do various other tasks. The robot does all this through logic, perception and language. Emotion in this case would be very difficult to implement into the machine since emotion is a feeling of love, hate, disgust, fear, etc. However, according to functionalism, mental states are functional states. Computers are themselves simply machines that implement functions. A functionalist would argue that mental states are like the software states of a computer. According to this theory it is possible that a machine running the right kind of computer program could have mental states. It could have beliefs, hopes and pains. But to what extent will the robot know something? It may be programmed to mimic the reaction to certain feelings, for example a buzzer could sound if it’s too hot, but this doesn’t mean that the machine feels the pain of burning heat. The robot, like many other artificially intelligent machines, is programmed to follow a set of codes or rules. Following a code is like the human reflex system: it happens nearly instantaneously and automatically. But how many codes can a robot or computer system follow, and is it enough to emulate a real human being? Even if you believe the functionalist argument, does it prove that a programmed mental state is equivalent to a brain with feelings?
This is nearly impossible to tell, which is why Alan Turing devised the Turing test in the 1950s.
The Turing test is an investigation to see how well a computer can achieve life-like conversations. The Turing test involves a computer and a judge (human being). The judge may ask the computer questions or have a normal conversation. The person and the computer are in different rooms so that there is no influence from bodily appearance. The aim is to have a computer trick the judge into thinking it is another human. The reasoning being that if we can’t tell the machine is different from us then we could infer that it has a mind. But will the computer actually know what it is saying or will it just follow the rules it’s been set? We can say that the computer can use deductive logic to assume the right answer and respond to the human by saying what it thinks is correct. The computer can also use inductive logic to build up its database automatically. It can accumulate a linguistic capability but this capacity on its own is no more than the regeneration of symbols. The machine is capable of response but it doesn’t follow necessarily that it has a mind and knows what its responses mean. The Turing test does not give me evidence of the machine’s ability to know something.
Let’s investigate feelings further. The only emotions and feelings you can feel are your own. I know that feelings are being felt by me so I know I have a mind. We can’t touch or see a mind so I know it’s different from a brain. If I dissect a brain (since it’s the part of the body that must hold knowledge) I will never find a physical part that I can touch and say is a feeling. But, if I go back to my earlier statement, I still know I have feelings and I believe this is beyond dispute. This reasoning underlies my belief that there is a separation of mind and body. René Descartes (1596–1650) had this idea and believed that mental and physical states are separate from one another. He said that the mind and soul did not follow the laws of physics; however, he still stated that they can affect each other. Taking this a step further, I contend that even if an emotion can be described, and a physical body such as a machine can be programmed to display the emotion, it still doesn’t prove that it feels the emotion. It would have to have a mind to feel and we can’t see the mind. A robot could be programmed to jump for joy if it hears certain words, so it can indeed display joy, but where is the proof that it feels joy? To be fair, we also cannot see the human mind and in reality have no proof that the human feels joy or is just acting. The difference though is that acting out joy in itself requires that we feel the need to pretend to be happy, whereas the robot was only reacting to programmed word signals.
To answer the question of whether a machine can know, I believe we always come back to the discussion of feelings and mind. The human can use all the areas and ways of knowing. The main difference is that the machine cannot develop its own feelings and emotions to know something. It can certainly be programmed to mimic, and it can use reason and perception, but only to a certain extent. If it is not interacting with real objects and new events in the world and making up its own mind, then its ability to develop its knowledge base is restricted.
To conclude, I believe that the physical state is separate from the mental state (though closely interlinked). If you acknowledge this then there is no basis for believing that a machine has feelings and a mind and therefore I conclude that a machine cannot know. I realise though that to some extent I am relying on belief that I cannot fully justify.
 Lecture 21 Notes by Frank Guerin (http://www.csd.abdn.ac.uk/~fguerin/teaching/CS1013/lec21print.ppt)
Revisited is a series of blog posts in which I share words I have previously written for university, school or old blogs. View all Revisited posts here. Revisited 01 – written for: Professional Topics in Computing (University of Aberdeen), March 2011.
“Privacy is dead – get over it!” – Steve Rambam
In recent months, privacy on the Internet has become a huge topic of discussion. Articles, blog posts, discussion groups and newspapers are all covering the subject from their own viewpoint, whilst social networking websites such as Facebook are continuously defending their core values. Some privacy advocates, such as Rambam, even believe privacy is dead. To fully understand the topic of this discussion, let’s first glance at the definition of privacy.
Privacy – The state or condition of being free from being watched, observed or disturbed by other people: she returned to the privacy of her own home.
Privacy is a term that has been used for many centuries and in the modern era of computing it can be used to describe the personal privacy regarding transactions of data via the Internet. At its core however, privacy will always have the same meaning as stated above.
Humans have a tendency to want some privacy whilst at the same time we are very curious beings; we want to know and see things that are unfamiliar to us. Technology has always allowed people to feel more connected to each other and with this, we know more and more information about other people. The newspaper brought global information to people, the telephone allowed for instant information to be passed around and now with the Internet we can find out a huge amount of information about places, news, topics and most importantly: people. All these technologies allow for new ways in which privacy can be breached.
Here the debate of trade-offs comes in; one’s privacy always correlates with many other factors. Having total physical privacy would mean living far away from everyone else, whereas if you want to live and interact with other people in a city, you need to trade off some of your privacy. The same applies to the Internet; one could simply not use the Internet and the vast services available on it and live in complete privacy, but that would mean missing out on a huge technology. The more someone uses the Internet, the less privacy they have.
In the Google example above, most people feel that the risk is slight and that the trade-off and value from the services Google provides are worth it. How far are people willing to go for convenience; just how much information will the average person be willing to give up on the Internet? It is as though the more time we spend on the Internet, the more quickly we exchange our privacy for convenience.
A service called Futurephone launched in America just a few years after the Gmail hype; the service allowed anyone to make international calls for free. The service gained many great reviews but one key aspect of the business was questionable: how does Futurephone make money? There was no signup required, no name or address was given, no credit card details were asked for. It was as though Futurephone was the perfect telephony service. Only one thing became clear – they must be listening and recording calls.
It’s not just on the Internet that our privacy has sunk to a new low. As banks have become extremely popular over the past century, one might be concerned about the information they hold about their customers. Credit cards paint a perfect picture of a person’s lifestyle, and a privacy intruder could quickly find out vast amounts of information by looking at these facts. But again, consumers often don’t mind the lack of privacy. Consumers don’t mind supermarkets knowing their buying habits as long as they get discounts on loyalty cards. They don’t mind being personally welcomed to a website as long as it saves them from logging in again. Add all these pieces of information together and a data miner could build a near-perfect profile of a person. In the end, who really cares if a company knows what food you bought?
Living in public has become easier with the Internet; we can create websites that describe ourselves, have videos and photos of our life but most importantly we can have real time video conversations and streams that allow anyone to watch us live our life.
Dan Brown, now a YouTube partner, started making Internet video logs (or vlogs) from his bedroom when he was still in college. He started making these for fun, simply telling the world how his day was and sometimes having a rant at the camera. He quickly got more views and became “internet famous”, and as YouTube expanded they asked him to become a YouTube partner. He started making higher-production vlogs and shared more of his life on the Internet than ever before. This progressed for a couple of years until he released Dan 3.0, his latest vlog project. The project is an Internet experiment: let the world decide how his life progresses. One could say that his life, through the aid of the Internet, has become completely transparent to the public. The public can see all his actions, decide on what he must do and partake in his everyday life events. The on-going experiment is extremely interesting to watch, but as it’s a daily summary being posted, the live aspect is missing.
Reverse back to the early dot-com bubble, when the Internet was really starting to take off. An Internet entrepreneur by the name of Josh Harris envisioned a future making full use of the Internet; he claimed that his Internet TV stations would quickly overtake major TV channels such as CBS. Harris decided to try out an experiment called “We Live in Public”, in which he and his then-girlfriend Tanya Corrin would live under 24-hour surveillance viewable by anyone. The experiment was interesting in many ways; it showed us what the power of the Internet has allowed us to do and how privacy could be stolen away from someone completely, but most important was the psychological aspect of the experiment. Harris cancelled “We Live in Public” after he had a mental breakdown and couldn’t cope. Is this a sign that humans are programmed not to cope with surveillance? An interesting aspect of transparent living is that we may feel more inclined to do certain things when we know we are being watched. We know someone may see a purchase on a credit card, so we may decide against buying an item. The same applies to the Internet; if the Internet becomes so transparent, we may start to feel as though we are continually being watched and judged online. This could have serious consequences; we may change our buying habits, our social behaviour, the way we interact with other businesses – just about every aspect of our life could change if we were under heavy surveillance.
It may not be obvious to many, but there is a good chance that surveillance footage or some other information has been put online without your knowledge. A big headline in recent months has been Google’s Street View service, a service that allows anyone to virtually roam the streets of our planet. The headlines often included homeowners suing Google, claiming that Street View images of their houses violated their property and privacy rights. This sparked an interesting debate, and the German court quickly vindicated Google’s Street View operation in Germany. The court’s reasoning was “that it is legal to take photographs from street level”, and hence Street View was allowed to remain operational in Germany. Many people are still unhappy about this, but the arguments on both sides stand valid; people want their privacy vs. “anyone could just take a picture here”.
In the end, this all comes down to the need for a business to make money. For websites to stay online whilst costing nothing to consumers, they need to sell advertising. Advertisements are now highly targeted to users of websites, only displaying highly relevant ads that have a much higher click-through rate than what might be expected from older advertisements on the Internet. There are companies trained to data mine your online behaviour, and with this information they can then go to advertising companies and help them create these highly specialised ads. All this is made easy because people simply want convenience; it’s simply much easier to use a free e-mail service provided by Google than to set up your own mail server, configuring ports and account details.
If someone is extremely scared of being public, there are things that can be done to maximise anonymity online. E-mails can be encrypted, allowing only the sender and receiver to understand the message content. Anonymous proxy networks such as Tor allow users to stay private by tunnelling encrypted traffic through a network of computers, so their ISP (Internet Service Provider) would not know which websites they are visiting or with whom they are communicating.
Much has to do with adaptation to change. The world is constantly changing and especially now, with superfast computer chips and silicon technologies moving at a faster pace all the time, change is inevitable. It is obvious to say that humans are scared of the unknown. We like what we know, and change can scare us quickly. It took the Internet only a few years to reach millions of users; the technology spread faster than the newspaper, radio or television. It has been a huge change to our lifestyle, and many people simply do not know what to make of this new era. It is interesting to look at how different generations adapt to the Internet. Many young people do not have a problem with sharing their information on the Internet; sharing videos, photos, location and contact details is a social standard on networking websites such as Facebook. The older generation is simply not used to this and often finds it hard to understand exactly why this shift is happening.
Not everyone is always happy with Facebook, however; it is the most criticised website when it comes to privacy concerns. Facebook has often rolled out new features on an “opt-out” scheme; only users who understand the privacy controls well enough will know how to opt out of the new features. In 2007, Facebook launched a new (opt-out) feature for developers of websites called Beacon. This would automatically post news feed updates to one’s network on Facebook about activity the user had been undergoing on websites outside of Facebook. The most interesting headline to come out of this privacy debacle is the “Lane v Facebook” case of 2007: a class-action lawsuit in the United States District Court for the Northern District of California. In this court case, Sean Lane bought a diamond ring as a surprise gift on Overstock.com for his wife. The ring was supposed to be a surprise, but without Lane’s knowledge, hundreds of friends on his Facebook account were notified of his purchase. Facebook saw the feature as a “complete new way of advertising online”, but what they hadn’t realised is that they were at that point crossing the line of personal Internet privacy. The idea was clever, but simply failed in execution because our social norms simply don’t allow for such in-depth, and quite importantly automated, sharing of information. Some of the acts that were allegedly violated included the Electronic Communications Privacy Act, the Video Privacy Protection Act, the California Consumer Legal Remedies Act and also many others such as the California Computer Crime Law and the Computer Fraud and Abuse Act. The Beacon feature was pulled from Facebook when they updated their privacy settings in December 2007, and although no money was handed out to negatively affected Facebook users of the Beacon program, a $9.5 million fund for privacy and security was created. The plaintiffs did also get some money: Lane received $15,000; Sean Martin and Mohammed Sheikha received $7,500 each. The other representative plaintiffs each received $1,000. All of this money also came out of the Facebook settlement fund.
So where will the future take us? Will we live in even less privacy? Will we live in public? Users of the Internet will always do what they want. As seen in the Lane v Facebook case about the Beacon program, if a company goes over the top then it will be punished for it. It will always be a difficult decision to state what “over the top” means, but it is certain that the people will let their voice be heard. In a sense, we’ve always lived in public. We have always physically been visible to other human beings, and it’s only personal information such as income, relationships and thoughts that have stayed private – and these will always be private if that’s what someone wants. Technology has always made everyday tasks easier for us, and the Internet has done so in a massive way. The trade-off of privacy and convenience in the case of the Internet has been worth it, but what’s most important is for companies and their websites to become transparent. Education is always key to understanding what’s happening, and it is a website’s duty to make it clear to the user what information is public to the world, public to a friend network and private to just themselves.
The question “Do We Live in Public?” really comes down to how each individual interprets and uses the Internet. Some may remain old-fashioned and ignore social networking websites, search engines, et cetera altogether, whilst other people won’t mind the trade-off and will happily use them. As long as websites are transparent and educate users about their actions, the future should look good.
 “Toor2122 – Steve Rambam – Privacy Is Dead – Get Over It”
 New Oxford American Dictionary
 “Blogger sleuths uncovered its more likely business model: it was exploiting a government subsidy that pays Iowa a few cents per incoming long-distance call. Iowa was sharing the revenue with Futurephone.” – http://www.scientificamerican.com/article.cfm?id=dont-worry-about-whos-watching