Monday 29 June 2015

D1 (v2): The Role of TCP/IP and how it links to the Application Layer


Introduction
This report will explain the application layer and how it links to various protocols such as HTTP (Hypertext Transfer Protocol), HTTPS (HTTP Secure), SMTP (Simple Mail Transfer Protocol) and FTP (File Transfer Protocol). I will also describe each protocol and what it is used for, with particular focus on TCP/IP, which is one of the main subjects of this document.
The Internet Protocol Suite
The internet protocol suite, or Transmission Control Protocol/Internet Protocol (TCP/IP), provides end-to-end connectivity by specifying how data should be packetized, addressed, transmitted, routed and received at the destination. It acts like the rule book of the internet: it defines how every single packet gets created, sent and retrieved, and the internet follows this protocol suite so that data is sent and received effectively and securely.
The TCP/IP protocol stack consists of four layers: the Application Layer (represented by protocols such as HTTP, the Hypertext Transfer Protocol), the Transport Layer (represented by TCP), the Network Layer (represented by IP) and the Data Link Layer. Every single packet carries information that represents these various layers: the application the packet is part of is marked on the packet, as are its source IP address and the address of its intended target. TCP/IP allows two devices to communicate with each other, ensures data reaches its correct destination and makes sure that all of the data is received; depending on the protocols used within the layers, it can also be responsible for sending this data securely.
It is important to note that each layer does not care about the others; each does its own job once it receives data from the previous layer. The application layer, for example, only does its job and sends data to the transport layer once it receives data from the client computer. So if the user opened up a browser and went to a website, the HTTP protocol would be used in the application layer, based on the user's input and where they're trying to connect to.
The Application Layer
The application layer is the topmost layer of the TCP/IP protocol stack; it includes data about the type of application that is being used for the current process. The application information needs to be set before data goes through the rest of the stack, as the data packets need to identify what they're being used for and how. The application layer will use a variety of protocols depending on what software the packet is being sent from or received by: it can use HTTP, FTP, POP (Post Office Protocol) or IMAP, among others. Which protocol gets used depends on the port the client receives data from; for example, HTTP traffic is sent through port 80. Packets carry a label from their application-layer protocol, which is how they are detected and brought through the correct port.
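The port-to-protocol relationship described above can be sketched as a simple lookup table. This is only an illustration, not part of any real network stack; the port numbers are the standard well-known assignments:

```python
# Well-known TCP ports and the application-layer protocols served on them.
# Illustrative lookup table only, not a real protocol implementation.
WELL_KNOWN_PORTS = {
    20: "FTP (data)",
    21: "FTP (control)",
    25: "SMTP",
    80: "HTTP",
    110: "POP3",
    143: "IMAP",
    443: "HTTPS",
}

def protocol_for_port(port):
    """Return the application-layer protocol usually served on a TCP port."""
    return WELL_KNOWN_PORTS.get(port, "unknown")

print(protocol_for_port(80))   # HTTP
print(protocol_for_port(143))  # IMAP
```

This is exactly the kind of label that lets a receiving host route an incoming packet to the right application.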
The Transport Layer
This layer is responsible for splitting data into multiple packets and then reconstructing those packets into the file that was sent. It also performs checks on each individual packet, so that when one is lost or corrupted it can be retransmitted. The transport layer receives data from the application layer; how big each packet is and how many there are depends on the type and size of the file. The protocol used within the transport layer can be TCP or UDP; however, TCP is far more reliable than UDP and is therefore the most widely used. The transport layer is also responsible for creating a connection between hosts and moving packets towards their destination: it hands the packets down to the network layer so that they can be routed and sent correctly.
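The split-and-reassemble behaviour described above can be sketched in a few lines of Python. This is a toy model (sequence numbers plus fixed-size chunks), not real TCP segmentation:

```python
def packetize(data: bytes, size: int):
    """Split data into numbered fixed-size chunks, like TCP segments."""
    return [(seq, data[start:start + size])
            for seq, start in enumerate(range(0, len(data), size))]

def reassemble(packets):
    """Rebuild the original data, even if packets arrived out of order."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"hello transport layer"
packets = packetize(message, 5)
packets.reverse()                      # simulate out-of-order arrival
assert reassemble(packets) == message  # the file is reconstructed intact
```

The sequence numbers are what make reordering (and, in real TCP, retransmission requests) possible.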
The Network Layer
The network layer, which mainly uses the IP protocol, is responsible for the organisation and movement of data on a network. It's the layer that deals with the routing of data over various networks; it's also the layer that gives a packet its source IP address and the destination address for where it's being sent to. It can also remove those addresses and pass the packet up to the transport layer for further unpacking. Information that the network layer creates is passed down to the data link layer to finish the process and send the packet; this is how the data link layer knows what network the packet is going to and what Ethernet card address to put on it. This layer sends packets to their correct destination and also handles the movement of data on a network where hopping takes place.
Data Link Layer
The data link layer, or just link layer, adds the hardware addresses to the packets it receives from the network layer and then dispatches them onto the local cable that leads to the internet. On a local area network these hardware addresses would be Ethernet card addresses, or MAC addresses. It also ensures that data has been sent and received successfully, as it's the last layer of the TCP/IP stack. It can detect, and in some cases correct, errors that occur within the procedure; in the case of TCP/IP it can make sure each network host recognises how much data, and what type of data, it's about to receive. The layer deals with the hardware of a server or computer system, interacting with drivers within the Operating System (OS) and the network interface card attached to or embedded within the system. This goes for both the client and the server.
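The framing step described above, prepending destination and source MAC addresses to the payload received from the network layer, can be sketched like this. It is a deliberately simplified Ethernet-style frame; real frames also carry a type field and a checksum:

```python
def mac_to_bytes(mac: str) -> bytes:
    """Convert a MAC address string like 'aa:bb:cc:dd:ee:ff' to 6 raw bytes."""
    return bytes(int(part, 16) for part in mac.split(":"))

def build_frame(dst_mac: str, src_mac: str, payload: bytes) -> bytes:
    """Prepend destination and source hardware addresses to the payload."""
    return mac_to_bytes(dst_mac) + mac_to_bytes(src_mac) + payload

frame = build_frame("11:22:33:44:55:66", "aa:bb:cc:dd:ee:ff", b"IP packet")
assert len(frame) == 6 + 6 + len(b"IP packet")  # 12-byte address header
```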

Links to the Application Layer
TCP/IP makes use of the application layer so that each network host can understand what sort of scenario the packets are being sent under. The application layer changes to represent an email differently from, say, an upload of a .zip file, and this allows accurate determination of what protocol should be used within it. So if the application was sending a file, it would use the File Transfer Protocol (FTP), and if it was sending an email it would use the Internet Message Access Protocol (IMAP) or POP3. The application layer changes what it represents based on the port it goes through, so packets going through port 80 on the internet represent the HTTP protocol.
Without the application layer within the TCP/IP stack, the receiver would not understand what type of data it was receiving, which could ruin the rest of the procedure. If the data isn't identified it cannot be manipulated accurately and won't be understood by both hosts; the transfer would simply fail. The internet protocol suite is a well-known way of distributing data across the internet, and its layers can interchangeably use different protocols depending on the situation and the type of application manipulating the data.
The application layer has the widest range of protocols available to it, including DNS, FTP, IMAP, POP3 and HTTP. HTTP is a good example, as it is responsible for surfing the World Wide Web: when visiting a page, data has to be sent and received between the client and the server. The client would most likely send information such as where you're located and the inputs you make on the website; the server would receive this information and then send back the data that makes up the website.
The layer that determines it's a website in the first place is the application layer, which makes use of the HTTP protocol to ensure that website information is sent between the user and the server. The application layer also has links to the HTTPS protocol, a more secure version of HTTP that allows for the safe transfer of data across the internet. The application layer and the internet protocol suite are closely linked in this respect, as the World Wide Web is probably the most popular use of the internet. The application layer is responsible for managing that data and making sure it is handled by the right type of software.
So for HTTP the packets are most likely being sent to a browser and received by a web server containing the information for the HTTP-based website. The same logic applies to email: with IMAP, email servers exchange small but meaningful amounts of data with clients requesting to view their messages.

Application Layer Protocols
The way the application layer functions will depend on the protocol it is running, which in turn depends on the port the user is requesting data from. For example, when data is being sent through port 80 it will use the HTTP protocol, whereas data sent through port 443 will use the HTTPS protocol, an extended, more secure version of HTTP. The application layer on its own, as mentioned beforehand, does not interact with or care about any other layer's activity; it just receives data from the client/server and sends it to the next layer. In the case of TCP/IP it sends the data to the transport layer.
HTTP and HTTPS
HTTP and HTTPS are two different protocols, but they share much of the same architecture and work in the same way. They use two different ports: port 80 for HTTP and port 443 for HTTPS. These two protocols are used on the World Wide Web, a sub-section of the Internet. They are reserved for website data stored on web servers across the globe; they're what allows a user to communicate and interact with different webpages.
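The default-port difference can be illustrated with Python's standard `urllib`: given a URL, the effective port is whatever the URL specifies, falling back to 80 for http and 443 for https. This is a sketch of browser behaviour, not a full URL resolver:

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def effective_port(url: str) -> int:
    """Return the port a browser would connect to for this URL."""
    parts = urlsplit(url)
    return parts.port or DEFAULT_PORTS[parts.scheme]

assert effective_port("http://example.com/") == 80
assert effective_port("https://example.com/") == 443
assert effective_port("https://example.com:8443/") == 8443  # explicit port wins
```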
HTTPS, on the other hand, is a protocol based on HTTP that provides better security and less risk of a user having their data eavesdropped on. HTTPS is actually a combination of the TLS/SSL and HTTP protocols; it's important to note that websites served over HTTP can also bump up their security afterwards by adding TLS/SSL and switching to HTTPS.
IMAP
IMAP is mainly used for email messaging and communicating over dedicated email servers that handle email-related data. IMAP uses port 143, while IMAPS (the variant that runs IMAP over SSL) uses port 993. IMAP is mainly used where one account is connected across multiple devices at once, as it allows multiple clients to be connected at the same time. This is its primary advantage over POP3, which does not support multiple devices; it's extremely useful in this day and age, since people have laptops, tablets and smartphones that can all make use of the protocol.
IMAP is used to view your email rather than downloading it off a server to keep on your client, which saves memory and storage space. This is advantageous at a time when smartphones and tablets need all the storage they can get, which is why IMAP is primarily used on mobile devices.
POP3
POP3 is an old but reliable protocol used for email; it goes by default through port 110, while POP3S goes through port 995. POP3 downloads emails, so it's very similar to protocols that transfer files, except in this case the files are just emails. POP3 provides a very basic service: if you were to view your email account from one place with POP3, and then try the same thing on another device, you would not see the same content. This is because POP3 downloads messages to one primary system and keeps the emails there, rather than leaving them on the server as IMAP does. This can be advantageous because it reduces the chance of someone eavesdropping on your emails; it also means organisations in general will have a harder time looking through them.

TLS/SSL and QUIC
Transport Layer Security (TLS) and its predecessor Secure Sockets Layer (SSL) are security protocols used to boost the security of other protocols. They are routinely attached to base protocols to make them more secure; for example, when TLS or SSL is used with HTTP it becomes HTTPS, a more secure version of the same protocol. They sit on top of the transport layer, just below the application layer, which is why they are used to secure the top-layer protocols; it's why you have things like POP3S and even IMAPS.
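The pattern described here, a base protocol plus TLS/SSL yielding a secured variant on its own well-known port, can be summed up in a small table. The ports are the standard assignments; the real wrapping is of course done by a TLS library, not a dictionary:

```python
# Base application protocols and their TLS/SSL-secured variants.
# Illustrative only: shows the naming and port conventions, nothing more.
SECURE_VARIANTS = {
    "HTTP": ("HTTPS", 443),   # HTTP  on port 80  -> HTTPS on 443
    "IMAP": ("IMAPS", 993),   # IMAP  on port 143 -> IMAPS on 993
    "POP3": ("POP3S", 995),   # POP3  on port 110 -> POP3S on 995
}

def secure_variant(protocol: str):
    """Return the TLS-secured name and port for a base protocol."""
    return SECURE_VARIANTS[protocol]

assert secure_variant("HTTP") == ("HTTPS", 443)
```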
These protocols are based on issued certificates that confirm the identity and legitimacy of the site the user is visiting; companies like Wikipedia and Facebook use TLS 1.2. It's why HTTPS rather than HTTP appears in the address bar. Google uses its own transport protocol with built-in encryption for its websites, known as QUIC.
QUIC, or Quick UDP (User Datagram Protocol) Internet Connections, is an experimental transport-layer network protocol with built-in encryption, developed by Google. Its main goal is to improve the performance perceived by users of web applications currently using TCP (since QUIC is a transport-layer protocol, not only an application-layer encryption method).

Conclusion
In conclusion, the application layer is one of the essential layers of the Internet Protocol Suite; its links are obvious, as it is required to make sure TCP/IP is serving the right application, whether that be email, file transfer or the World Wide Web. In some special cases it can even use a protocol that also operates at the transport layer for security and transferring of data. It is not the most important layer (or the internet protocol suite wouldn't also be referred to as TCP/IP), but it is definitely a necessary layer that helps organise the huge amounts of data that go through the giant web of networks known as the Internet.


Sunday 28 June 2015

D3: My Website and How it Meets the Defined Requirements and Purpose

Introduction
In this document I will demonstrate that the website I created meets the defined requirements and achieves the defined purpose. Where requirements could not be met, or the purpose could not be completely achieved, I will explain why. Important points to consider in this document are semantic web design, overall design, web accessibility and ease of use.
Requirements
Accessibility
In terms of accessibility, the colour palette that I selected achieves an adequate text-to-background contrast. This allows people to read the text effectively; poor text-to-background contrast makes a page hard to read, and people with visual impairments will find it even harder than others.
Font size and spacing were made easy to read, and I made sure there's a difference between a heading, the main title above the navigation bar and the actual navigation bar items. I made sure text in the content of the webpage was neither too small nor too big, and that users could tell the difference between a heading and content text. It's also consistent: it doesn't change its style from page to page, keeping the selected style throughout the entire website.
All links are found on the home page, since they just appear in the navigation bar and the banner. The banner takes you to the homepage and the navigation bar will guide you through the website.
All <img> tags have alt attributes allocated to them.
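A check like this can be automated with Python's standard html.parser module. This is a hedged sketch of how one might verify that every <img> carries an alt attribute; the filenames are made up for the example:

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collect the src of every <img> tag that is missing an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            if "alt" not in attributes:
                self.missing.append(attributes.get("src", "?"))

checker = AltChecker()
checker.feed('<img src="logo.png" alt="Company logo"><img src="banner.png">')
print(checker.missing)  # the second image has no alt text
```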
Identity
The company's logo is probably where the website falls short, as I am currently deciding between two logos in the demo. The original company logo can be found in the footer, while the new company banner can be found at the top; unlike the logo, the banner also takes you to the home page.
The home page has an introduction that describes the website and why it’s there, it does it well enough that from the home page, the purpose of the website is clear.
Navigation
Main navigation has been made easily identifiable: it is above the fold and at the centre of the user's vision. Since the navigation bar is a series of buttons stating the names of webpages, it should let the user know that's how to navigate through the website. It also has visual animations which work as cues to show that the navigation bar can be interacted with and that each button is clickable.
The navigation bar also changes depending on the page the user is on: the selected page is highlighted yellow to match the content of the page. This is also used to guide users as to where they are on the website.
Each navigation bar label is clear and concise and contrasts well with the overall theme of the website. The number of buttons is reasonable: there are five pages in the whole website, so there are five buttons on the navigation bar. The old company logo is not linked to the home page, but the banner, located above the header and the navigation bar, is. Links are also consistent and easy to identify, since the buttons react to the user's mouse and the banner has a blue border showing that it's a link.
Content
All major headings are clear and descriptive and summarise what's below them; this allows experienced users to skip most of the website if they need to. It also allows users new to the concept of PC gaming to understand it in more depth.
Critical content is always above the fold on every webpage the user goes through, and as seen throughout the document, styles and colours are consistent using a yellow/blue colour palette. Emphasis (bold) is hardly used in the website, other than for headings and the primary header on the top of the webpage.
A good example of critical content being above the fold is the construction page, where the embedded video appears above the fold so that the user sees the video before reading the description. If the description came before the video, then it would not be an informative page and would, in my opinion, be a failed design.
URLs are meaningful and user-friendly; they appear in the description below the video on the “Construction” page. The URLs were originally part of the original video's description, crediting different sources.
As you can see below different credits are placed, for example the iTunes link to the music used in the video and the YouTube channel that is the owner of the music. There are also meaningful links to the community of the author of the video, and the links to the system components list used in the video.
These are user friendly since they have a description describing where they go before they are listed. So for example “Join our community forum:” is before all the social network links.
HTML titles were not made explanatory, however; I did not see the need, as the title was already on the navigation bar and highlighted for the user. In the future I will implement proper page titles, as the buttons do not react to screen readers; it should not be a problem, though, as long as the title matches the navigation bar selection.
Purpose and Target Audience
The purpose of my website was to teach people who are into console gaming the ins and outs of PC gaming. The website should be an introductory stage for PC gamers, able to teach a console gamer to build their own PC and play games on it. This is mostly done via an informative video, but the other pages on the website play a role as well, as descriptive pieces. And of course the store page is a demo page for purchasing items, with a validation form at the bottom.
It is meant to entice people to play on a PC rather than a home console, providing reasons and benefits for such a choice to the user viewing the webpage. I believe it does not do a good job of that, simply due to the presentation of the website; it would not compare well with the sites of top brands. This definitely needs improvement: the site needs to look professional if I want to successfully persuade console gamers to convert to PC gaming.
However, the purpose of being informative and helpful to people who do want to build a gaming PC is, in my opinion, achieved, simply due to the number of reasons I gave for choosing a PC, the short lesson clearing up technical jargon, and the presentation of the components for a first budget PC build. The video on the construction page, I believe, achieves all of the above, as it's an informative and well-presented video (courtesy of LinusTechTips on YouTube) which will make people think about getting a PC, especially since the PC presented is not that expensive.
The purpose is suited to a specific target demographic: mainly teenagers up to people around the age of 20/21. It's mainly for teenagers and young adults who are able to get a job and want to spend their money on a gaming PC. Some younger teenagers also save up their money for a gaming PC or a console, and I want to make sure they come to the right decision for a better gaming experience.
Can my Website be improved?
I do believe my website can and should be improved; it needs to be in the development stage for a lot longer so that the presentation and theme can be straightened out. There also needs to be more consideration for the visually impaired, as this website would not be completely compatible with screen readers. However, it does follow the law, so there is no problem with <img> alt tags.
The W3C recommendations need to be fully covered as shown in the D2 submission, some of the recommendations they talk about have not been fully checked, and so my webpage is not fully accessible to the world. This needs to be done before the website ever gets published.
Differences to Original Design
The final website is mostly different from the original design as seen in my submitted wireframes; a lot of elements changed and did not become part of the final design. I do regret some changes: I believe the header should have stayed the same as the intended design, as that would have been simpler. In the future I will most likely move the logo back to its intended spot along with the header. The home page was not changed much, except that the images do not have a description, as they didn't really need one in the first place.
[Wireframe: Page 1, Home]
The “Why a PC” page is completely different from the intended design: it is just a big list of reasons why someone should go for the PC, mostly due to time constraints with the project.
[Wireframe: Page 2, Why a PC]


The “Where to Begin” page was changed after I realised the intended design would not be compatible with smartphones. So I changed it to the same layout as the home page, which presented no issues.
[Wireframe: Page 3, Where to Begin]
My biggest regret is the store page. I was planning to programme a carousel for selecting store items; below it would appear an information box containing the item name, price and type, along with an image and description of the item. This was simply not possible, as I did not have the technical capability to achieve the desired effect; combined with the time constraints, I did not have enough time to learn how to implement it within the deadline. The store page was changed to a normal grid layout with jQuery popup code that shows information when the user hovers over any of the store items.
[Wireframe: Page 4, The Store]


The “Construction” page actually turned out better than its intended design, the implementation of the video was a much better solution than just images and text.
[Wireframe: Page 5, Construction]
Conclusion
In conclusion, the final website successfully meets the defined requirements and satisfies the needs of those who want to build their own PC. I believe it has met its purpose; even though I regret some design changes, it has still fulfilled the intended goals. The website will definitely need improvements, as it needs to be completely accessible to those who are disabled or have a visual impairment. This would also maximise the audience the website can reach, because if it isn't easily readable by the disabled, you are cutting out a huge audience base.
The design phase could have produced much better results, as I did not realise my technical inability to create the things I envisioned, like the carousel feature in the store design. Now that I have learnt a lot more about HTML, CSS and JavaScript, any future websites I create will not have these design flaws.
Overall I am impressed with what I have created, as it fulfils all defined requirements and the purpose I originally thought of. I would say the most important aspect of this website is the colour palette, as that is the only thing to stay true from the start of the design to the finished product.

References
http://www.w3.org/TR/WCAG10/full-checklist.html
https://tecnocode.co.uk/2005/11/14/semantic-web-design/

http://www.thesitewizard.com/gettingstarted/startwebsite.shtml

Saturday 27 June 2015

D2: The World Wide Web and User Access to Information

Introduction
In this document I will be discussing the techniques that can be used on web pages to aid user access to information. I will also be discussing, explaining and going through the history of the World Wide Web (WWW) from the very first website to the implementation of HTML5 (Hypertext Mark-up Language 5) and popularity boost of JavaScript and the creation of web applications and cloud access. Levels of web accessibility will also be discussed in this document.
History
The techniques that can be used on web pages to aid user access to information are a huge subject, and to cover them effectively the history of the WWW has to be covered.
The First Website
The first website ever created was built purely with HTML and was used just to display text across a network; using a single server, scientists could post their research documents and easily have others view and comment on their work. At the time (1990) the WWW was not a well-known or heavily used part of the internet, being used instead to share research papers across a network for multiple scientists to read.
This brings us to the very first technique that aided access to information: the simple presentation of text-based information. HTML allowed people to present hypertext over a very large network, one that would only grow bigger down the line. This is the first and biggest component, as without text, websites would not be very informative; basic text is the simplest and most effective way of aiding the user's access to information. The Web was invented by Tim Berners-Lee at CERN, the European Laboratory for Particle Physics in Geneva, Switzerland, with HTML as its sole publishing language (a state that lasted about seven years).
Tables and Magazine Layout
Tables were introduced from 1991 onwards and were implemented in the Viola web browser, which became one of the most sophisticated browsers of its time. Viola was the first browser to support features like style sheets, tables and other nestable HTML elements. Tables and the magazine layout helped enormously with new websites that tried to organise their content: instead of being plain text, websites could now use a table layout to place button-like functionality and allow developers to create a user interface.
This was the rise of the importance of the user experience and the user interface, which became ever more significant aspects of websites. It aids user access to information because it makes every website easier to understand and easier to navigate; it speeds up access to information, since it is now apparent where to go within a website. For example:
The BBC's 2001 website used tables to display information, which allowed for easy navigation of the site at the time. This provides greater access to information, as you can visually indicate where to go and present information in a structured way to the user.

Cascading Style Sheets (CSS)
CSS was a very important technique that aided user access to information. CSS1 allowed developers to change font properties and the colour of the text and background. It introduced the cascade for style sheets and allowed HTML to reference a CSS stylesheet, which would change the visual interpretation of the website. CSS allowed for more variety in the choice of typeface and different colouring of elements of the web page, like the text and background. CSS1 was introduced in December 1996 and wasn't originally intended as a layout language; it was aimed at styling the finished product. It did support “float”, but didn't support “position”.
CSS began to be thought of as a layout language with its second iteration, CSS2. CSS2 supported tables and was released in May 1998; the main indication that it was meant for laying out an HTML webpage was W3C's recommendation “9.6.1 Fixed positioning … Authors may use fixed positioning to create frame-like presentations”. Looking at the same BBC website in 2004, you can clearly see the benefits of CSS and how its layout system could be beneficial.
After this was realised, more and more websites started using CSS2 for web layout, completely abandoning tables and the magazine layout they could achieve. This caused the W3C to create new standards for web design, incorporating a new idea known as semantic web design.
Semantic Web Design
Semantic web design means making sure every tag does what it's supposed to do. The <table> tag, for example, is not meant to be used for layout; it's meant for presenting information in a table, as in a word-processing document. HTML was not made to be a layout language, so no tag should be used for the purpose of laying out the webpage. It was when CSS2 arrived that semantic web design became standardised.
Semantic web design also covers the usage of headings: each heading should only contain information relating to the content of the webpage. For example, using the word “lucky” randomly on a page within h4 tags is not semantic, since you would be using HTML for formatting, and in semantic web design only CSS should be used for formatting.
So now HTML is only used to present information in a text format, just as when it was first created in 1990. Because of semantic web design, your website should run on its own without any CSS; if it doesn't, you most likely did not follow semantic web design. My own website accomplishes semantic design rather well, because it still runs normally without any CSS.
With CSS you can see that all text has been formatted in some way and that a standard layout has been established. Nothing resembles the original HTML other than the text that appears on the screen.
When the CSS is removed, all formatting is gone; the layout now aligns everything to the left and depends only on the HTML tags that carry a line break. For example, the paragraph tag in HTML causes a break, so there is line spacing between each chunk of text on the page. All the images return to their original size, as it was the CSS that resized them to a size suitable for the website.
When your website shows this clear a difference when you remove the CSS, it shows that you’ve accomplished semantic web design, as you did not use any tags in ways they weren’t meant to be used. Everything at its core has remained the same; it just looks different, having lost its fonts and other formatting. It’s also a good way to see if you used any HTML tags for layout, which would be a big fault in your programming.
Rise of Adobe Flash
There were multiple attempts at making animations appear on a webpage; one of the more successful (though ultimately unsuccessful) attempts was Adobe Flash. When first created, Adobe Flash allowed developers to implement Flash animations into their websites. However, Flash did have a few problems initially, like being unable to be used with screen readers and requiring the installation of an extra piece of software for it to work. This made Adobe Flash very bloated, but it was the best tool for the job at the time.
Adobe Flash was used at a time when a lot of features a website lacked were fulfilled using plug-ins, which were themselves quite bloated software. Smartphone compatibility was even worse, as these plug-ins could drain too much battery life; this was such a problem that Apple openly denounced Adobe Flash and restricted it on their devices. Apple and other companies agreed that they needed an open-source alternative which was lightweight and a part of the web's own infrastructure.
The Rise of HTML5 and CSS3
HTML
This open-source alternative was realised with the introduction of HTML5 and CSS3, which provided support for video and a huge array of different HTML5-based animations. Instead of companies resorting to GIFs to show a rotating logo, they could now do it with HTML5's native support for video. HTML5 has dedicated video and audio tags that support a range of common video and audio types, allowing video with audio to be streamed on a website natively.
HTML5 was proposed in 2008, with the final release happening in October 2014. Due to the rise of smartphones, HTML5 included features that supported them: with video and audio appearing natively in HTML, all smartphones, including iPhones, could display video through the web browser. This reduced battery drain and wasn't too stressful on the system, so smartphones remained steady in terms of their performance.
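A short sketch of the native tags described above (the file names are placeholders, not real files); the browser plays these without any plug-in:

```html
<!-- HTML5's dedicated media tags: no Flash or other plug-in required -->
<video src="rotating-logo.mp4" controls width="320"></video>
<audio src="jingle.mp3" controls></audio>
```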
CSS3
CSS3 provided new features like rounded corners, gradients, animations, transitions and shadows; it also provided new layouts like flexible box or grid layouts and multi-columns. It also provided transparent colour, as CSS3 supports the alpha channel, so there are now four values instead of just three (before CSS3 only three colour channels were used, e.g. rgb(0,0,0); with CSS3 you can now use rgba(0,0,0,0.0)).
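A brief sketch of what these CSS3 features look like in a stylesheet (the class name is illustrative, not from a real site):

```css
/* Illustrative CSS3 rules: rounded corners, a shadow, a transition,
   and a semi-transparent colour using the new alpha channel. */
.store-item {
  border-radius: 8px;                          /* rounded corners */
  box-shadow: 2px 2px 6px rgba(0, 0, 0, 0.3);  /* soft drop shadow */
  background-color: rgba(255, 255, 0, 0.5);    /* 50% transparent yellow */
  transition: background-color 0.3s ease;      /* animate colour changes */
}
```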
CSS3 combined with HTML5 utterly destroyed any plug-ins that provided extra functionality in web development, because it was native and hardly resource-intensive, unlike Adobe Flash. This led to the downfall of Adobe's software and it started falling into obscurity; the blow hit hard, and Adobe announced that Flash Player 11.1 would be the final version for smartphone devices.
Downfall of Adobe Flash
With the rise of HTML5 and CSS3 came the downfall of Adobe Flash and other similar plug-ins like Shockwave. Since there was an easier, native alternative for implementing video, animation and sound within web browsers, companies like Google, Apple and Microsoft started shying away from things like Adobe Flash. YouTube, Google's biggest subsidiary, now uses HTML5-based video rather than the Adobe Flash-based video it used before the implementation of HTML5. Eventually Adobe Flash fell into obscurity, as it was too risky for a company to invest in a programme that forced users to download an external piece of software for it to work on their system. Flash still has some presence on the internet today, but on mobile devices and YouTube, Adobe Flash has been ditched for the more suitable HTML5. Adobe themselves admitted that Flash had fallen and explained the reasons their own piece of software failed.
JavaScript
JavaScript is a scripting language used alongside CSS and HTML to add interactivity, simple animation and security to web pages. JavaScript can also change the document content that is displayed to the user. JavaScript is client-side, so the user's computer is what does the processing for JavaScript when it gets used. Despite the name, Java and JavaScript are completely different from one another: they have different semantics and their syntax derives from different sources (JavaScript's syntax is actually derived from C).
JavaScript is very useful for creating interactive web design, and is a third layer of a web page (if the developer chooses to use it). For example in my website JavaScript has been used to display animations when the user hovers their mouse over the navigation bar, it also allows for popup boxes to show when they hover their mouse over the store items in the store page of my website.
It also allows validation to be programmed, which is very useful for forms (for example, if you're filling out a form to purchase an item). In JavaScript you can programme elements to only accept a certain value, or to display an alert when the submit button is pressed, depending on the content inputted. So if the “email” field in my website does not contain the symbol “@”, it will display an alert message that states the email is invalid.
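The email check described above can be sketched as a small JavaScript function (the function name and the surrounding form-handling comments are illustrative, not the site's actual code):

```javascript
// Minimal client-side validation sketch: the email field must contain
// an "@" with at least one character on each side of it.
function isValidEmail(value) {
  var at = value.indexOf("@");
  return at > 0 && at < value.length - 1;
}

// In a real page this would run when the form is submitted, e.g.:
// form.addEventListener("submit", function (e) {
//   if (!isValidEmail(emailField.value)) {
//     alert("The email entered is invalid.");
//     e.preventDefault(); // stop the form from being sent
//   }
// });
```

A check this simple will of course accept some malformed addresses; real sites usually combine a client-side check like this with server-side validation.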
JavaScript was developed at Netscape, created by Brendan Eich while he was working for Netscape Communications Corporation. It was first developed under the name Mocha, but was beta-released under the name LiveScript, shipping in beta releases of Netscape Navigator 2.0 in September 1995. It was eventually renamed JavaScript.
jQuery
jQuery is not a new programming language; rather, it is a huge library of JavaScript functions that makes developing with JavaScript a lot easier. It was developed with ease of use in mind and has pre-built functions that users don't have to recode themselves to implement things like animations and effects on their website. It's well known for speeding up the development time of any web developer and is a tool used almost every day by JavaScript programmers; jQuery's motto is even “Write less, do more”.
The main way this improves access to information is that it gets websites published a lot faster, so users receive their information a lot quicker. Interactive information is valuable as it provides a visual way of presenting a lot of information without a lot of work. “It was originally released in 01/2006 at BarCamp NYC by John Resig and was influenced by Dean Edwards' earlier cssQuery.”
Server-side Scripting and Client-side Scripting
Server-side scripting is separate from JavaScript, as there are many other programming languages to use for server-side scripting; to name a few, PHP, Python and Perl (often querying a database such as MySQL). Server-side scripting allows a server to process a request from a client, run scripts on its end and send the client the resulting webpage. However, servers are already tasked with a heavy load of moving data around, like accounts and databases. The environment server-side scripting runs in is a web server, rather than a web browser as with client-side scripting.
So server-side scripting would only contain things like validation and form-based data, comparing a user's input to the server's database to confirm someone's identity, bank details or even poll answers. It would in no way be used for the things JavaScript is used for, like loading animations or effects onto a webpage. This is how the two processes of loading interactivity and interacting with information are separated into client-side scripting and server-side scripting. This means server-side scripting provides dynamic content, as what information is selected from the database to produce a certain HTML page depends on the user.
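The idea of dynamic, per-user content can be sketched as follows (written in JavaScript for consistency with the rest of this report; the "database" object and function names are illustrative, not a real server API):

```javascript
// Sketch of server-side dynamic content: the page sent back depends on
// which user asked for it. A real server would query an actual database.
var database = {
  alice: { interests: ["graphics cards", "monitors"] },
  bob:   { interests: ["keyboards"] }
};

function buildPage(username) {
  var user = database[username];
  if (!user) {
    return "<p>Unknown user</p>";
  }
  // Tailored HTML is generated from the stored data for this user.
  return "<p>Recommended for you: " + user.interests.join(", ") + "</p>";
}
```

The browser only ever receives the finished HTML string; the lookup and page-building all happen on the server, which is what distinguishes this from client-side scripting.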
This greatly affects user access to information, as the information is tailored to each user, providing the information the user wants rather than everything the user would need to sift through. Of course this depends on the cloud/web application the user is using, but overall it's an improvement in the type of information the user receives. Rather than unwanted information, the user receives information they're interested in, which betters the user experience of accessing information.
Client-side scripting is the JavaScript that was discussed above; it will most likely be used to display animation, effects and other types of interactivity like drop-down boxes. This is done to reduce stress on the server, and when a balance is struck between the amount of scripting on the client side and the server side, both the client's computer and the server perform better.
Web Applications and Cloud Access and Applications
JavaScript and HTML5 enable more than just simple animations and interactivity; they can also enable the usage of cloud applications and give access to the cloud through client- and server-side scripting. Good examples of cloud applications with cloud access would be Google Docs with Google Drive, and Office 365 with Microsoft Word. These cloud applications are cloud office suites that can be accessed through a browser rather than on your own PC.
There are also web applications, which are designed to be run solely in a web browser; things like electronic banking (Santander) and online shopping applications (Amazon) are considered web apps. They use a combination of server-side and client-side scripting to accomplish a web app that could also be run offline (though this would obviously hinder most of the functionality). Web apps are known to not have a lot of customisation and tend to be the same for every user.
Cloud apps, on the other hand, are extremely customisable and change depending on the user account using them. For example, in Google Drive your entire folder and its contents change depending on the account you're logged into, and if you open a Google Doc, that can change completely depending on the Doc/Word file you opened.
Levels of Web Accessibility
Web accessibility is an important consideration for the internet as a whole, as it is directed at those with visual impairment and other disabilities. A small example of accomplishing some form of web accessibility is making sure every non-text element, like an image or video, has an alternative text element; this satisfies checkpoint 1.1 of the Web Content Accessibility Guidelines (WCAG) 1.0, and ignoring it could even get you into legal trouble. This is done in the form of alt attributes on images and animated GIFs.
In my own website I have done this to make sure it stays within the law and is accessible to the visually impaired:
So now the screen reader will not ignore the images and will read out the alt tags in place of them.
The reason this is required is so that screen readers like “JAWS” can read the alternative text in place of the image, allowing a visually impaired person to grasp the idea of the image without having to see it. Officially, most of the WCAG is a recommendation, but it can actually be against the law to not include alt text with your images, as it severely affects visually impaired people browsing the web. Companies should have users with a range of different disabilities test their website so that they conform to a web accessibility standard.
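An illustrative snippet (not the site's actual markup; the file name and description are made up) of what such an alt attribute looks like:

```html
<!-- A screen reader that cannot show the image reads the alt text aloud -->
<img src="store-logo.png" alt="Store logo: a blue shield with the letters PC">
```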
Web accessibility checkpoints come in three levels: Priority 1 (the highest and most important), Priority 2 (still quite important, but optional) and Priority 3 (not as important as the first two). I'll go through each of them and explain how it helps user access to information, especially for the disabled and visually impaired. I will also show how I have met the priorities discussed in my own website.
Priority 1
Within Priority 1, checkpoint 1.1 has been achieved as seen above on page 10, as I have described every image with an alt tag for screen readers. 2.1 is also achieved, as seen above on page 5, and 6.1 has been achieved in the same way; this allows user access to information as it doesn't prevent those that only use HTML from reaching the information in the website. 7.1, 6.2 and 6.1 are not applicable to this website as it is just a demo. 14.1 has been achieved as it uses English (UK) and (US) throughout; the site was designed for English speakers only, so that is also achieved.
Priority 2
2.2 has been achieved and is proven through images throughout this document: the colour palette I chose was picked especially for contrast, making sure that text was dark blue against a light yellow. Most of these aspects have been covered, other than 3.2, as I do not have the technical capability to create validation for such a thing. Quotations haven't been used, so no quotation mark-up is required. 7.2, 7.4, 7.4, 11.1, 11.2, 13.2 and 13.3 are not applicable. 10.1 was not done because I believe it would have negatively affected the user experience, since users would need to go through a notification box just to see information about the item they wanted to purchase.
Labels are consistent as seen in page 8.
Priority 3
If the user chooses to go through the webpages in order (from left to right), then the acronyms used on the store page will first be listed and explained on the “Where to Begin” page. A new user will most likely click this button as they do not understand the PC gaming market, so they will learn these acronyms beforehand, while an experienced user will not need to visit the page. 4.3 was not completed because the site is a demo and might undergo language changes.
Conclusion
In conclusion, user access to information has been worked on for a long time; features and functionality have been added due to the huge increase in the size of the internet and in how many people use it for the World Wide Web. So much so that the World Wide Web is often mistakenly called the Internet. With so much traffic, there needs to be a globalised standard of web development so that users can have proper access to information.
It first began with just text-based websites, and then came the inclusion of the “<img>” tag, which vastly affected web development. So much more functionality and so many features have been added to the World Wide Web since that point. We now have HTML5, which supports full-blown video media and animations, and with the coming of CSS3 and JavaScript, purely semantic web design can now be achieved: HTML being just a mark-up language like it used to be, CSS used solely for formatting and layout, and JavaScript used for interactivity and enhancing the user experience.
With that comes support for those who are visually impaired or who have some other disability that makes the Web hard to use. The three priorities and the W3C's recommendations, combined with the inclusion of the primary ones within the law, are an important step for those with a disability browsing the web. It prevents their discrimination and, most importantly, gives them access to information just like any other user on the World Wide Web.
 

References

https://en.wikipedia.org/wiki/History_of_the_World_Wide_Web#1980.E2.80.931991:_Invention_and_Implementation_of_the_Web
http://info.cern.ch/hypertext/WWW/TheProject.html
http://www.w3.org/People/Raggett/book4/ch02.html
http://www.barrypearson.co.uk/articles/layout_tables/history.htm
https://en.wikipedia.org/wiki/Cascading_Style_Sheets
https://tecnocode.co.uk/2005/11/14/semantic-web-design/
https://www.youtube.com/watch?v=IsXEVQRaTX8
https://developer.mozilla.org/en/docs/Web/CSS/CSS3
http://mashable.com/2011/11/11/flash-mobile-dead-adobe/
https://en.wikipedia.org/wiki/JavaScript
https://jquery.com/
https://en.wikipedia.org/wiki/JQuery#History
http://www.sitepoint.com/server-side-language-right/
http://www.pythonschool.net/server-side-scripting/introduction-to-server-side-scripting/
http://www.sqa.org.uk/e-learning/ClientSide01CD/page_18.htm
http://www.w3.org/TR/WCAG10/full-checklist.html
http://www.out-law.com/page-330