ICT
Sunday, October 11, 2015
HTML5
HTML5 is a markup language used for structuring and presenting content on the World Wide Web. It was finalized and published on 28 October 2014 by the World Wide Web Consortium (W3C).[2][3] It is the fifth revision of the HTML standard since the inception of the World Wide Web; the previous version, HTML 4, was standardized in 1997.
Its core aims are to improve the language with support for the latest multimedia while keeping it easily readable by humans and consistently understood by computers and devices (web browsers, parsers, etc.). HTML5 is intended to subsume not only HTML 4 but also XHTML 1 and DOM Level 2 HTML.[4]
Following its immediate predecessors HTML 4.01 and XHTML 1.1, HTML5 is a response to the fact that the HTML and XHTML in common use on the World Wide Web mix features introduced by various specifications with features introduced by software products such as web browsers and features established by common practice.[5] It is also an attempt to define a single markup language that can be written in either HTML or XHTML syntax (a small sketch follows this paragraph). It includes detailed processing models to encourage more interoperable implementations; it extends, improves and rationalizes the markup available for documents; and it introduces markup and application programming interfaces (APIs) for complex web applications.[6]
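As a small, hypothetical sketch of the two serializations (the file name is made up), the same image and line break can be written in HTML syntax or in the stricter XML-based XHTML syntax:

    <!-- HTML syntax: void elements need no closing slash, attribute quoting is optional -->
    <img src=logo.png alt=Logo>
    <br>

    <!-- XHTML (XML) syntax: every attribute quoted, every element explicitly closed -->
    <img src="logo.png" alt="Logo" />
    <br />

Both forms describe the same document tree; only the serialization rules differ.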
For the same reasons, HTML5 is also a potential candidate for cross-platform mobile applications: many of its features have been designed to run well on low-powered devices such as smartphones and tablets. In December 2011, the research firm Strategy Analytics forecast that sales of HTML5-compatible phones would top one billion in 2013.[7]
In particular, HTML5 adds many new syntactic features. These include the new <video>, <audio> and <canvas> elements, as well as the integration of scalable vector graphics (SVG) content (replacing generic <object> tags) and MathML for mathematical formulas. These features are designed to make it easy to include and handle multimedia and graphical content on the web without resorting to proprietary plugins and APIs.
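A minimal sketch of these elements in use (the file names and sizes below are invented for illustration):

    <video src="lecture.mp4" controls width="320"></video>
    <audio src="theme.ogg" controls></audio>
    <canvas id="chart" width="320" height="240"></canvas>
    <svg width="100" height="100">
      <circle cx="50" cy="50" r="40" fill="blue" />
    </svg>

Each of these kinds of content typically required a plugin or separate viewer before HTML5; here they are native parts of the page.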
Other new page structure elements, such as <main>, <section>, <article>, <header>, <footer>, <aside>, <nav> and <figure>, are designed to enrich the semantic content of documents.
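As an illustration, a minimal page body using these elements might be structured like this (the placeholder text is invented):

    <body>
      <header>Site banner and logo</header>
      <nav>Links to the blog archive</nav>
      <main>
        <article>
          <header>Post title and date</header>
          <section>The text of the post</section>
          <figure><img src="sticker.png" alt="HTML5 sticker"></figure>
        </article>
        <aside>Related posts</aside>
      </main>
      <footer>Copyright notice</footer>
    </body>

Unlike generic <div> wrappers, these names tell browsers, search engines and screen readers what role each region of the page plays.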
New attributes have been introduced, some elements and attributes have been removed, and some elements, such as <a>, <cite> and <menu>, have been changed, redefined or standardized. The APIs and Document Object Model (DOM) are no longer afterthoughts but are fundamental parts of the HTML5 specification.[6] HTML5 also defines in some detail the required processing for invalid documents, so that syntax errors are treated uniformly by all conforming browsers and other user agents.[8]
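For instance, the <canvas> element only becomes useful through its scripting API; here is a minimal sketch of drawing on a canvas through the DOM (the id and coordinates are made up):

    <canvas id="box" width="120" height="120"></canvas>
    <script>
      // Find the element through the DOM, then draw with the 2D context API.
      var ctx = document.getElementById("box").getContext("2d");
      ctx.fillStyle = "green";
      ctx.fillRect(10, 10, 60, 60);   // x, y, width, height
    </script>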
Photo credit: http://www.w3.org/html/logo/downloads/HTML5_sticker.png
Friday, October 2, 2015
Computer Programming
Computer programming (often shortened to programming) is a process that leads from an original formulation of a computing problem to executable computer programs. Programming involves activities such as analysis, developing understanding, generating algorithms, verifying the requirements of algorithms (including their correctness and resource consumption), and implementing (commonly referred to as coding[1][2]) algorithms in a target programming language. Source code is written in one or more programming languages. The purpose of programming is to find a sequence of instructions that will automate the performance of a specific task or the solution of a given problem. The process of programming thus often requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms and formal logic.
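As a small illustration of that process, from problem formulation to an executable sequence of instructions, here is a sketch in Python (the task and the names are invented for the example):

    # Problem formulation: report the average of a list of exam scores.
    # Algorithm: add the scores, divide by their count, reject empty input.
    def average(scores):
        if not scores:                       # correctness check on the input
            raise ValueError("no scores given")
        return sum(scores) / len(scores)

    print(average([85, 92, 78]))             # prints 85.0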
Related tasks include testing, debugging, and maintaining the source code, implementation of the build system, and management of derived artifacts such as the machine code of computer programs. These might be considered part of the programming process, but often the term "software development" is used for this larger process, with the terms "programming", "implementation", or "coding" reserved for the actual writing of source code. Software engineering combines engineering techniques with software development practices.
Within software engineering, programming (the implementation) is regarded as one phase in a software development process.
There is an ongoing debate on the extent to which the writing of programs is an art form, a craft, or an engineering discipline.[3] In general, good programming is considered to be the measured application of all three, with the goal of producing an efficient and evolvable software solution (the criteria for "efficient" and "evolvable" vary considerably). The discipline differs from many other technical professions in that programmers generally do not need to be licensed or to pass any standardized (or governmentally regulated) certification tests in order to call themselves "programmers" or even "software engineers." Because the discipline covers many areas, which may or may not include critical applications, it is debatable whether licensing is required for the profession as a whole. In most cases, the discipline is self-governed by the entities which require the programming, and sometimes very strict environments are defined (e.g. the United States Air Force's use of AdaCore and security clearances). However, representing oneself as a "professional software engineer" without a license from an accredited institution is illegal in many parts of the world.
Another ongoing debate is the extent to which the programming language used in writing computer programs affects the form that the final program takes. This debate is analogous to that surrounding the Sapir–Whorf hypothesis[4] in linguistics and cognitive science, which postulates that a particular spoken language's nature influences the habitual thought of its speakers. Different language patterns yield different patterns of thought. This idea challenges the possibility of representing the world perfectly with language, because it acknowledges that the mechanisms of any language condition the thoughts of its speaker community.
Ancient cultures seem to have had no conception of computing beyond arithmetic, algebra, and geometry, though they occasionally devised computational systems with elements of calculus (e.g. the method of exhaustion). The only mechanical device that existed for numerical computation at the beginning of human history was the abacus, invented in Sumeria circa 2500 BC. Later, the Antikythera mechanism, built some time around 100 BC in ancient Greece, became the first known mechanical calculator; it used gears of various sizes and configurations to perform calculations,[5] tracked the Metonic cycle still used in lunar-to-solar calendars, and could also be used to calculate the dates of the Olympiads.[6]
The Kurdish medieval scientist Al-Jazari built programmable automata in 1206 AD. One system employed in these devices was the use of pegs and cams placed into a wooden drum at specific locations, which would sequentially trigger levers that in turn operated percussion instruments. The output of this device was a small drummer playing various rhythms and drum patterns.[7] The Jacquard loom, which Joseph Marie Jacquard developed in 1801, uses a series of pasteboard cards with holes punched in them. The hole pattern represented the pattern that the loom had to follow in weaving cloth. The loom could produce entirely different weaves using different sets of cards.
Charles Babbage adopted the use of punched cards around 1830 to control his Analytical Engine. Mathematician Ada Lovelace theorized beyond the original intent of the Analytical Engine, describing how it could compute symbols as well as numbers and thereby building the foundation of modern programming: “That [the engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine.”[8] She wrote a program for the engine to calculate a sequence of Bernoulli numbers, becoming the world’s first programmer.[9]
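Her program computed Bernoulli numbers on paper; as a hedged modern sketch (in Python, using the standard recurrence rather than Lovelace's exact table of operations), the same sequence can be produced like this:

    from fractions import Fraction
    from math import comb

    def bernoulli(n):
        # B_0 = 1; for m >= 1 the recurrence sum_{j=0}^{m} C(m+1, j) * B_j = 0
        # gives B_m = -(1 / (m + 1)) * sum_{j=0}^{m-1} C(m+1, j) * B_j.
        B = [Fraction(1)]
        for m in range(1, n + 1):
            B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
        return B

    print(bernoulli(6))   # B_0..B_6 = 1, -1/2, 1/6, 0, -1/30, 0, 1/42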
In the 1880s, Herman Hollerith invented the recording of data on a medium that could then be read by a machine. Prior uses of machine-readable media, described above, had been for lists of instructions (not data) to drive programmed machines such as Jacquard looms and mechanized musical instruments. "After some initial trials with paper tape, he settled on punched cards..."[10] To process these punched cards, first known as "Hollerith cards", he invented the keypunch, sorter, and tabulator unit record machines.[11] These inventions were the foundation of the data processing industry. In 1896 he founded the Tabulating Machine Company (which later became the core of IBM). The addition of a control panel (plugboard) to his 1906 Type I Tabulator allowed it to do different jobs without having to be physically rebuilt. By the late 1940s, there were several unit record calculators, such as the IBM 602 and IBM 604, whose control panels specified a sequence (list) of operations and which were thus programmable machines.
The invention of the von Neumann architecture allowed computer programs to be stored in computer memory. Early programs had to be painstakingly crafted using the instructions (elementary operations) of the particular machine, often in binary notation. Every model of computer would likely use different instructions (machine language) to do the same task. Later, assembly languages were developed that let the programmer specify each instruction in a text format, entering abbreviations for each operation code instead of a number and specifying addresses in symbolic form (e.g., ADD X, TOTAL). Entering a program in assembly language is usually more convenient, faster, and less prone to human error than using machine language, but because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets also have different assembly languages.
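To make the "different notation for machine language" point concrete, here is a toy assembler sketch in Python; the opcode numbers, address table, and three-word instruction format are invented for illustration, not those of any real machine:

    # Hypothetical machine: each instruction is opcode, source address, destination.
    OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3}
    SYMBOLS = {"X": 16, "TOTAL": 17}          # symbolic names for memory addresses

    def assemble(line):
        # Translate one line such as "ADD X, TOTAL" into raw machine words.
        mnemonic, operands = line.split(None, 1)
        src, dst = (SYMBOLS[s.strip()] for s in operands.split(","))
        return [OPCODES[mnemonic], src, dst]

    print(assemble("ADD X, TOTAL"))           # [2, 16, 17]

The assembler does little more than look names up in tables, which is why two machines with different instruction sets necessarily have different assembly languages.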
The synthesis of numerical calculation, predetermined operation and output, along with a way to organize and input instructions in a manner relatively easy for humans to conceive and produce, led to the modern development of computer programming. In 1954, FORTRAN was invented; it was the first widely used high-level programming language to have a functional implementation, as opposed to just a design on paper.[12][13] (A high-level language is, in very general terms, any programming language that allows the programmer to write programs in terms more abstract than assembly language instructions, i.e. at a level of abstraction "higher" than that of an assembly language.) It allowed programmers to specify calculations by entering a formula directly (e.g. Y = X*2 + 5*X + 9). The program text, or source, is converted into machine instructions using a special program called a compiler, which translates the FORTRAN program into machine language. In fact, the name FORTRAN stands for "Formula Translation". Many other languages were developed, including some for commercial programming, such as COBOL. Programs were mostly still entered using punched cards or paper tape. (See computer programming in the punch card era.) By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. (Usually, an error in punching a card meant that the card had to be discarded and a new one punched to replace it.)
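In a modern high-level language the same formula is still a single line; a sketch in Python rather than FORTRAN:

    def f(x):
        # One readable formula; the interpreter or compiler expands it into
        # the loads, multiplies, and adds a programmer once wrote by hand.
        return x * 2 + 5 * x + 9

    print(f(3))   # 6 + 15 + 9 = 30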
As time has progressed, computers have made giant leaps in processing power, which has allowed the development of programming languages that are more abstracted from the underlying hardware. Popular programming languages of the modern era include ActionScript, C, C++, C#, Haskell, Java, JavaScript, Objective-C, Perl, PHP, Python, Ruby, Smalltalk, SQL, Visual Basic, and dozens more.[14] Although these high-level languages usually incur greater overhead, the increase in the speed of modern computers has made their use much more practical than in the past. These increasingly abstracted languages are typically easier to learn and allow the programmer to develop applications much more efficiently and with less source code. However, high-level languages are still impractical for some programs, such as those where low-level hardware control is necessary or where maximum processing speed is vital.
Computer programming has become a popular career in the developed world, particularly in the United States, Europe, and Japan. Due to the high labor cost of programmers in these countries, some forms of programming have been increasingly subject to offshore outsourcing (importing software and services from other countries, usually at a lower wage), making programming career decisions in developed countries more complicated, while increasing economic opportunities for programmers in less developed areas, particularly China and India.