Oliver's ISAD 504 Blog

Sunday, 5 February 2012

Seminar 3 involved looking at our topic area and deciding what we want to investigate further. I found a colleague's project quite interesting and as such spent a fair amount of time discussing it.
In seminar four we were divided into groups looking into similar subject areas, and we had quite an interesting discussion, particularly as one group member was very anti-Facebook and his project is to create a new social network that collects very little personal information. Another member of the group was studying privacy issues and raised the point that Facebook liaises with law enforcement in certain countries. The question then becomes: how much information do they share, and do they willingly give up information on all of their members? Has the internet in fact made it easier to spy on and police people, rather than being the home of free speech it is so often championed to be? There are many high-profile cases at the moment of the police using social media to catch and prosecute people, so it does seem to be getting easier for others to get hold of information about you.
One particularly interesting debate that came up was that the Megaupload owners were being extradited to the USA because their website had a .com domain name and as such fell under US law, which I was unaware of. The owners are in trouble for facilitating copyright infringement rather than actually carrying it out (more on that here: http://www.bbc.co.uk/newsbeat/16855279). In another high-profile case, a UK student is being extradited over his website, which shared links to TV shows and films online. How can it be that our government is allowing the USA to extradite our citizens when they haven't broken the law in our country? Surely this is wrong. In addition to this discussion, we raised the question of why YouTube and Facebook have not faced similar action, given that they allow members to upload content and there are often copyright-infringing videos on YouTube.
I have just finished writing 5,500 words on online collaboration, so I want to focus my research in this area. My research will look into what makes people sign up for a website; this will be useful for my final project, but also beyond that when I move into industry afterwards. I have several potential membership-site clients, and this information would be invaluable in helping to build lucrative online communities. My experiment will therefore focus on the major social networks and a few smaller networks, will take HCI factors into account, and will produce several web page designs as its end result. I will then ask users to sign up via one of three designs without explaining why; once inside, users will be presented with a short questionnaire, and on completing it an explanation of my research will be shown. I will then analyse the results and test whether the page design that I predicted would be most popular was indeed the most popular.
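To make the analysis stage concrete, here is a minimal sketch of how the sign-up comparison could work, assuming users are assigned to one of three hypothetical page designs and we count completed sign-ups per design (the design names and counts below are illustrative, not real data):

```python
from scipy.stats import chisquare  # chi-square goodness-of-fit test

DESIGNS = ["design_a", "design_b", "design_c"]  # hypothetical page variants

def assign_design(user_id: int) -> str:
    """Assign a user to one design deterministically so repeat visits are consistent."""
    return DESIGNS[user_id % len(DESIGNS)]

# Illustrative sign-up counts per design, as gathered at the questionnaire stage.
signups = {"design_a": 52, "design_b": 31, "design_c": 37}

# Null hypothesis: each design attracts sign-ups equally well.
observed = [signups[d] for d in DESIGNS]
stat, p_value = chisquare(observed)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
print("designs differ" if p_value < 0.05 else "no significant difference")
```

A test like this only says whether the designs differ overall; checking whether the design I predicted to be most popular actually won would be a separate comparison.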
Wednesday, 25 January 2012
Week Two
Hello,
During the lecture this week we were split into groups to discuss our areas of interest; I was placed in the social networking group. The session started off slowly, with only one or two people making suggestions, but it slowly grew into more of a discussion. One particular issue that I raised was that of second lives online and just how much people lie about themselves. This was spurred on in part by a Channel 4 documentary I watched the evening before, in which an 18-year-old man was using a chat room to talk to an 18-year-old girl. As time went on they traded pictures, but then another man was introduced who worked with the first man, and eventually the first man began to think that the second was also in a relationship with the girl. As the two worked together he became increasingly jealous and eventually shot him in the car park. In the end it turned out that the girl had never met the second man, the first man was in his mid-40s, and the "girl" turned out to be a mother who had stolen her own child's identity! Whilst this is obviously an extreme case, it highlights a few issues with the internet and how it allows people to masquerade under any guise they want and do things that they wouldn't normally do. It also raises the question of how much liability lies with the software developer whose product helped to facilitate the deception. In terms of empirical software engineering, it would be interesting to see what factors affect truthful responses and information on a web project, and how these factors could be harnessed to elicit the most truthful responses.
My own area of interest at the moment is how the internet can be used to facilitate co-operation and stimulate research. This area will be particularly useful to my project, as I am developing a piece of software that will enable lecturers to share information and collaborate online. The starting point for my research was Wikipedia itself, which claims: "Wikipedia contains more than 20 million volunteer-authored articles in over 282 languages, and is visited by more than 477 million people every month, making it one of the most popular sites in the world." However, academics still do not allow students to reference it due to the nature of the creation process. It is with this in mind that my next stage of research will be to discover just how effective the peer review process is, and that will be my target for the coming week. I have had personal experience with Wikipedia pages before: they generally get reviewed very quickly, and the other authors are hard to please. I also have experience on the Xbox Live Indie Games marketplace, where peer review works in much the same way. So does peer review make for quality information and products?
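One rough, measurable proxy for how quickly Wikipedia's review process operates is the gap between successive revisions of an article. Here is a minimal sketch using the public MediaWiki API; the article title is just an example, and a proper study would of course need many articles and a more careful definition of "review":

```python
from datetime import datetime
import requests  # third-party HTTP library

API = "https://en.wikipedia.org/w/api.php"

def revision_gaps(title: str, limit: int = 50):
    """Fetch recent revision timestamps for an article and return the gaps
    between successive edits, in hours."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp",
        "rvlimit": limit,
        "format": "json",
    }
    page = next(iter(requests.get(API, params=params).json()["query"]["pages"].values()))
    times = sorted(
        datetime.strptime(r["timestamp"], "%Y-%m-%dT%H:%M:%SZ")
        for r in page["revisions"]
    )
    return [(b - a).total_seconds() / 3600 for a, b in zip(times, times[1:])]

# Assumes the article exists and has at least two revisions.
gaps = revision_gaps("Empirical software engineering")  # example article
print(f"median gap between edits: {sorted(gaps)[len(gaps) // 2]:.1f} hours")
```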
How much does authorship matter? On Wikipedia it seems that you cannot see who has authored each part of an article, which leads to the question above. On WikiGenes[1], by contrast, the emphasis is placed heavily on authorship: a user can rate other authors, and all of the content on the page shows who authored it, which helps to verify the accuracy of the information.
What I want to leave this week's post with is the example of Goldcorp of Toronto, who were struggling both to find gold on their land and financially. They took the decision to create a competition with a $575,000 prize and published all of the information that they had on their site. The competition was web-based and allowed thousands of people from a huge variety of fields to apply their thoughts and ideas; the result was the discovery of well over $3 billion of gold. [2] It is interesting to see how open collaboration can yield substantial results, and in the context of empirical software engineering it would allow experimentation into which online tools facilitate the most profitable collaboration.
[1] http://www.wikigenes.org/
[2] http://www.clickadvisor.com/downloads/Tapscott_Innovation_and_Mass_Collaboration.pdf
Monday, 16 January 2012
Empirical Software Engineering
Hello, my name is Oliver and this is my blog for the ISAD 504 Module of my MSc.
The first seminar session that I attended was on Empirical Software Engineering, where I learnt the importance of applying scientific techniques to the practice of software engineering. This is a systematic, logical approach based on the principles of scientific experimentation, whereby we devise hypotheses, take measurements, and draw conclusions from the results.
This approach is somewhat contrary to what we coders tend to do in our work, where people such as myself tend to jump in without too much planning or thought about why we're doing what we're doing. I have also seen this in industry, where the boss decides that they need to use the latest technology because everybody else is, or because somebody recommended it. In particular I found this with social media, where people decide that they need to get into the social world without really thinking through their aims, their objectives, and the method they will follow to achieve them.
As recommended in Shirley's blog, I read this article, which I found quite interesting: http://www.netmagazine.com/features/15-top-web-design-and-development-trends-2012
The section on the demise of Flash and its predicted resurgence was quite interesting, as most people seem to be writing it off these days; it was also a surprise to see Ian Lobb quoted! The article makes quite an interesting point, and also shows that developers tend to be somewhat elitist and determined to use the latest technology rather than the best tool for the job. It also raises the question of whether a real business case is driving the use of the latest technology.
Section 12 of the article, regarding mobile workforces, was also interesting, as we're entering a time when many people are choosing to shun the 9-to-5 lifestyle in favour of freelancing. Whether this is because it is difficult to get a job at the moment or for some other reason, I believe it can only be a good thing. I expect that we will begin to see many more small (no more than five-person) online startups.
In terms of this article and empirical software engineering, it would be interesting to measure users' engagement time on a Flash-based website against a static website and one written using HTML5, and to gather the users' feedback; this could then be used to see whether the technology is improving the user experience. It would also be interesting to look at how much software is produced by small (fewer than five-person) teams and at their annual turnover, to see whether team size affects the level of success achieved. In this case it would be interesting to see whether end users prefer software developed by a small company over that of a larger one. This would be of particular use with regard to apps, where anyone can create them without large financial backing.
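As a sketch of how that first comparison could be analysed, assuming we had logged per-visitor engagement times for each of the three versions of a site (the numbers below are made up for illustration), a one-way ANOVA would test whether mean engagement differs between the technologies:

```python
from scipy.stats import f_oneway  # one-way ANOVA across independent samples

# Illustrative engagement times in seconds per visitor; real values would
# come from analytics logs for each variant of the same site.
flash_site = [112, 95, 130, 88, 104]
static_site = [70, 82, 64, 91, 77]
html5_site = [120, 101, 98, 143, 110]

stat, p_value = f_oneway(flash_site, static_site, html5_site)
print(f"F = {stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests mean engagement differs between at least two of
# the three technologies; it does not say which one is best on its own.
```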
I stumbled across an article during my reading which should actually have some relevance to my final project. The article, titled "Semantic web for e-learning bottlenecks: disorientation and cognitive overload", studies the limits of what a learner can take in at once and applies them in a web context. This is quite interesting, as we can often be bombarded with information on a web page, and this can disorient us. In the context of empirical software engineering, we could experiment with the amount of content on a page and users' engagement; from this we could deduce the optimal amount of content that should be shown in our software.
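Continuing the sketch above, if each trial recorded how many content items a page showed and how long the user stayed engaged, finding the best-performing amount is a simple grouping exercise (again with made-up observations standing in for real experiment data):

```python
from collections import defaultdict

# (number of content items on the page, engagement time in seconds) per trial;
# hypothetical observations, not real measurements.
trials = [(3, 40), (3, 55), (5, 80), (5, 95), (8, 60), (8, 52), (12, 30)]

by_count = defaultdict(list)
for items, engagement in trials:
    by_count[items].append(engagement)

averages = {items: sum(ts) / len(ts) for items, ts in by_count.items()}
best = max(averages, key=averages.get)
print(f"average engagement by item count: {averages}")
print(f"best-performing amount of content: {best} items")
```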
Finally, it seems a bit strange that people are still expected to pay for scientific journals when information is shared so freely across the web. In this video, http://www.ted.com/talks/lang/en/michael_nielsen_open_science_now.html, Michael Nielsen provides an interesting view on the currently closed culture of academia and makes the case for open research, whereby anybody can contribute their ideas and thoughts to a project; he also highlights how freely sharing information has created some powerful open source software.