
Thursday, December 23, 2010

The Future of IT Is in Cloud Computing

Cloud computing is expected to change IT in large companies because it enables enterprises of any size to exploit economies of scale and to pay only for the resources they actually use.

Many aspects of computing are already (or will soon be) available as cloud services. Infrastructure as a Service (IaaS) offerings such as Amazon Web Services, Microsoft Windows Azure, VMware vCloud, and the open-source Eucalyptus and Cloudera provide elastic computing, networking, and storage capacity, alongside Software as a Service (SaaS) and Platform as a Service (PaaS) offerings.

With cloud computing, heterogeneity has become a major characteristic of computing: resources in the cloud can be proprietary, open source, or a mixture of both.

With the growth of cloud computing, it is increasingly important for policy makers to ensure that domestic policies do not favor a particular technology. This not only serves global trade, international law, and economic growth; it also allows domestic companies to reap substantial profits from the opportunities that cloud computing provides.

Written by Stacy Baird, a former adviser to the U.S. Senate on technology and intellectual property issues. Source: detikinet


Wednesday, December 22, 2010

Cloud Computing in Indonesia: Difficult, but Not Impossible

Cloud computing is Internet-based computing, whereby shared servers provide resources, software, and data to computers and other devices on demand, as with the electricity grid. Cloud computing is a natural evolution of the widespread adoption of virtualization, Service-oriented architecture and utility computing.

Cloud computing is believed to offer real benefits to the Indonesian government. However, adoption remains a big challenge.

"Every country has its own unique challenges but, in general, all governments face similar problems relating to how they spend their budgets on infrastructure and social services and citizens," said Ken Wye Saw, Vice President of Public Sector for Asia, Microsoft .

Saw said that, on the one hand, governments need to provide better public services because, he said, people are now more vocal in demanding transparency, participation, and accountability. In addition, he continued, threats to national security have become more complex.

Therefore, Saw sees the public sector as having a real interest in adopting cloud computing. "Currently, the growth of data is a major challenge for the public sector," he revealed.

Microsoft, he said, continues to work with governments to help them make sense of all this data. "The cloud becomes an attractive proposition for handling this data because it offers virtually unlimited processing capability," said Saw.

Saw admitted, though, that governments can be resistant to cloud computing technology. "The various challenges to the cloud revolve around political issues and sovereignty rather than technical details, but ultimately we believe the economics are too compelling for the public sector to ignore." Source: detikinet

Thursday, March 5, 2009

What is Software Cracking

Software cracking is the modification of software to remove protection methods: copy protection, trial/demo version, serial number, hardware key, date checks, CD check or software annoyances like nag screens and adware.

The distribution and use of cracked copies is illegal in almost every developed country. There have been many lawsuits over cracking software, but most had to do with the distribution of the duplicated product rather than the process of defeating the protection, due to the difficulty of constructing legally sound proof of individual guilt in the latter instance. In the United States, the passing of the Digital Millennium Copyright Act (DMCA) legislation made software cracking, as well as the distribution of information which enables software cracking, illegal. However, the law has hardly been tested in the U.S. judiciary in cases of reverse engineering for personal use only. The European Union passed the European Union Copyright Directive in May 2001, making software copyright infringement illegal in member states once national legislation has been enacted pursuant to the directive.

History

The first software copy protection was on early Apple II, Atari 800 and Commodore 64 software. Game publishers, in particular, carried on an arms race with software crackers. Over time, publishers have resorted to increasingly complex countermeasures to try to stop unauthorized copying of their software.

Unlike modern computers that use standardized drivers to manage device communications, the Apple II DOS directly controlled the step motor that moves the floppy drive head, and also directly interpreted the raw data (known as nibbles) read from each track to find the data sectors. This allowed complex disk-based software copy protection, by storing data on half tracks (0 1 2.5 3.5 5 6...), quarter tracks (0 1 2.25 3.75 5 6...), and any combination thereof. In addition tracks did not need to be perfect rings, but could be sectioned so that sectors could be staggered across overlapping offset tracks, the most extreme version being known as spiral tracking. It was also discovered that many floppy drives do not have a fixed upper limit to head movement, and it was sometimes possible to write an additional 36th track above the normal 35 tracks. The standard Apple II DOS copy programs could not read such protected floppy disks, since the standard DOS assumed all disks had a uniform 35 track, 13 or 16 sector layout. Special nibble-copy programs such as Locksmith and Copy II Plus could sometimes duplicate these disks by using a reference library of known protection methods, but when protected programs were cracked they would be completely stripped of the copy protection system, and transferred onto a standard DOS disk that any normal Apple II DOS copy program could read.

One of the primary routes to hacking these early copy protections was to run a program that simulates the normal CPU operation. The CPU simulator provides a number of extra features to the hacker, such as the ability to single-step through each processor instruction and to examine the CPU registers and modified memory spaces as the simulation runs. The Apple II provided a built-in opcode disassembler, allowing raw memory to be decoded into CPU opcodes, and this would be utilized to examine what the copy-protection was about to do next. Generally there was little to no defense available to the copy protection system, since all its secrets are made visible through the simulation. But because the simulation itself must run on the original CPU, in addition to the software being hacked, the simulation would often run extremely slowly even at maximum speed.

The most common protection method on the Atari computers was "bad sectors". These were sectors on the disk that were intentionally unreadable by the disk drive. The software would look for these sectors when the program was loading and would stop loading if an error code was not returned when accessing these sectors. Special copy programs were available that would copy the disk and remember any bad sectors. The user could then use an application to spin the drive by constantly reading a single sector and display the drive RPM. With the disk drive top removed a small screwdriver could be used to slow the drive RPM below a certain point. Once the drive was slowed down the application could then go and write "bad sectors" where needed. When done the drive RPM was sped up back to normal and an uncracked copy was made. Of course cracking the software to expect good sectors made for readily copied disks without the need to meddle with the disk drive. As time went on more sophisticated methods were developed, but almost all involved some form of malformed disk data, such as a sector that might return different data on separate accesses due to bad data alignment. Products such as the "Happy Chip" became available that were hardware add-ons similar to today's game console modchips. However, the Happy Chip would allow the user to make exact copies of the original program with copy protections in place on the new disk. "Happy Chip" owners quickly became popular in game trading circles.

On the Commodore 64, several methods were used to copy protect software. For software distributed on ROM cartridges, subroutines were created that attempted to write to the ROM. If nothing happened, the presence of a ROM cartridge was verified, but if the software had been moved to RAM, the write routine would disable the software. Because of the operation of Commodore floppy drives, some write protection schemes would cause the floppy drive head to bang against its top and could cause the drive head to become misaligned. In some cases, cracked versions of software were desirable to avoid this result.

Most of the early software crackers were computer hobbyists who often formed groups that competed against each other in the cracking and spreading of software. Breaking a new copy protection scheme as quickly as possible was often regarded as an opportunity to demonstrate one's technical superiority rather than a possibility of money-making. The cracker groups of the 1980s started to advertise themselves and their skills by attaching animated screens known as crack intros in the software programs they cracked and released. Once the technical competition had expanded from the challenges of cracking to the challenges of creating visually stunning intros, the foundations for a new subculture known as demoscene were established. Demoscene started to separate itself from the illegal "warez scene" during the 1990s and is now regarded as a completely different subculture. Many software crackers have later grown into extremely capable software reverse engineers; the deep knowledge of assembly required in order to crack protections enables them to reverse engineer drivers in order to port them from binary-only drivers for Windows to drivers with source code for Linux and other free operating systems.

With the rise of the Internet, software crackers developed secretive online organizations.

Most of the elite, or well-known, cracking groups make software cracks entirely for respect in "The Scene", not for profit. From there, the cracks are eventually leaked onto public Internet sites by people or crackers with access to the well-protected FTP release archives, and are turned into pirated copies and sold illegally by other third parties.

The Scene today is made up of small groups of very talented people who more or less compete to have the most skilled crackers and the best methods of cracking and reverse engineering.

Methods

The most common software crack is the modification of an application's binary to cause or prevent a specific key branch in the program's execution. This is accomplished by reverse engineering the compiled program code using a debugger such as SoftICE, OllyDbg, GDB, or MacsBug until the software cracker reaches the subroutine that contains the primary method of protecting the software (or by disassembling an executable file with a program such as IDA). The binary is then modified using the debugger or a hex editor in a manner that replaces a prior branching opcode with its complement or a NOP opcode so the key branch will either always execute a specific subroutine or skip over it. Almost all common software cracks are a variation of this type. Proprietary software developers are constantly developing techniques such as code obfuscation, encryption, and self-modifying code to make this modification increasingly difficult.

A specific example of this technique is a crack that removes the expiration period from a time-limited trial of an application. These cracks are usually programs that patch the program executable and sometimes the .dll or .so linked to the application. Similar cracks are available for software that requires a hardware dongle. A company can also break the copy protection of programs that they have legally purchased but that are licensed to particular hardware, so that there is no risk of downtime due to hardware failure (and, of course, no need to restrict oneself to running the software on bought hardware only).

Another method is the use of special software such as CloneCD to scan for the use of a commercial copy protection application. After discovering the software used to protect the application, another tool may be used to remove the copy protection from the CD or DVD. This may enable another program such as Alcohol 120%, CloneDVD, Game Jackal, or Daemon Tools to copy the protected software to a user's hard disk. Popular commercial copy protection applications which may be scanned for include SafeDisc and StarForce. [1]

In other cases, it might be possible to decompile a program in order to get access to the original source code or code on a level higher than machine code. This is often possible with scripting languages and languages utilizing JIT compilation. An example is cracking (or debugging) on the .NET platform where one might consider manipulating CIL to achieve one's needs. Java's bytecode also works in a similar fashion in which there is an intermediate language before the program is compiled to run on the platform dependent machine code.

Advanced reverse engineering of protections such as SecuROM, SafeDisc, or StarForce requires one or more crackers to spend a great deal of time studying the protection, eventually finding every flaw in the protection code, and then writing their own tools to "unwrap" the protection automatically from executable (.EXE) and library (.DLL) files.

There are a number of sites on the Internet that let users download cracks for popular games and applications (although at the danger of acquiring malicious software that is sometimes distributed via such sites). Although these cracks are used by legal buyers of software they can also be used by people who have downloaded or otherwise obtained pirated software (often through P2P networks and torrent trackers).

Effects

The most visible and controversial effect of software cracking is the releasing of fully operable proprietary software without any copy protection. Software companies represented by the Business Software Alliance estimate and claim losses due to piracy.

Cracking has also been a significant factor in the domination of companies such as Adobe Systems and Microsoft, as these companies and others have benefited from piracy since the 1980s.[citation needed] Vast numbers of college and high school students adopted readily available applications from these companies. Many of these students would then go on to use them in their professional lives, purchasing legitimate licenses for business use and introducing the software to others until the programs became ubiquitous.[2]

Industry Response

Apple Computer has begun incorporating a Trusted Platform Module into their Apple Macintosh line of computers, and making use of it in such applications as Rosetta. Parts of the operating system not fully x86-native run through the Rosetta PowerPC binary translator, which in turn requires the Trusted Platform Module for proper operation. (This description applies to the developer preview version, but the mechanism differs in the release version.) Recently, the OSx86 project has been releasing patches to circumvent this mechanism. There are also industrial solutions available like Matrix Software License Protection System.

Microsoft has sought to reduce common Windows-based software cracking with its NGSCB initiative, planned for future versions of its operating system.

Source: Computer Cracking

What is Computer Hacking

Computer hacking is the practice of modifying computer hardware and software to accomplish a goal outside of the creator’s original purpose. People who engage in computer hacking activities are often called hackers. Since the word “hack” has long been used to describe someone who is incompetent at his/her profession, some hackers claim this term is offensive and fails to give appropriate recognition to their skills.

Computer hacking is most common among teenagers and young adults, although there are many older hackers as well. Many hackers are true technology buffs who enjoy learning more about how computers work and consider computer hacking an “art” form. They often enjoy programming and have expert-level skills in one particular program. For these individuals, computer hacking is a real life application of their problem-solving skills. It’s a chance to demonstrate their abilities, not an opportunity to harm others.

Since a large number of hackers are self-taught prodigies, some corporations actually employ computer hackers as part of their technical support staff. These individuals use their skills to find flaws in the company’s security system so that they can be repaired quickly. In many cases, this type of computer hacking helps prevent identity theft and other serious computer-related crimes.

Computer hacking can also lead to other constructive technological developments, since many of the skills developed from hacking apply to more mainstream pursuits. For example, former hackers Dennis Ritchie and Ken Thompson went on to create the UNIX operating system in the 1970s. This system had a huge impact on the development of Linux, a free UNIX-like operating system. Shawn Fanning, the creator of Napster, is another hacker well known for his accomplishments outside of computer hacking.

In comparison to those who develop an interest in computer hacking out of simple intellectual curiosity, some hackers have less noble motives. Hackers who are out to steal personal information, change a corporation’s financial data, break security codes to gain unauthorized network access, or conduct other destructive activities are sometimes called “crackers.” This type of computer hacking can earn you a trip to a federal prison for up to 20 years.

If you are interested in protecting your home computer against malicious hackers, investing in a good firewall is highly recommended. It’s also a good idea to check your software programs for updates on a regular basis. For example, Microsoft offers a number of free security patches for its Internet Explorer browser.

Source: Computer Hacking

Who is a Programmer

A programmer is someone who writes computer software. The term computer programmer can refer to a specialist in one area of computer programming or to a generalist who writes code for many kinds of software. One who practices or professes a formal approach to programming may also be known as a programmer analyst. A programmer's primary computer language (Lisp, Java, Delphi, C++, etc.) is often prefixed to the above titles, and those who work in a web environment often prefix their titles with web. A programmer is not a software developer, software engineer, computer scientist, or software analyst. These professions typically refer to individuals possessing programming skills as well as other software engineering skills. For this reason, the term programmer is sometimes considered an insulting or derogatory oversimplification of these other professions. This has sparked much debate amongst developers, analysts, computer scientists, programmers, and outsiders who continue to be puzzled at the subtle differences in these occupations.

Those proficient in computer programming skills may become famous, though this regard is normally limited to software engineering circles. Many of the most notable programmers are often labeled hackers. Programmers often have or project an image of individualist geekdom, resistance to "suits" (referring to both business suits literally and figuratively to the "Establishment"), controls and conformity.

Ada Lovelace is popularly credited as history's first programmer. She was the first to express an algorithm intended for implementation on a computer, Charles Babbage's analytical engine, in October 1842. Her work never ran, though that of Konrad Zuse did in 1941. The ENIAC programming team, consisting of Kay McNulty, Betty Jennings, Betty Snyder, Marlyn Wescoff, Fran Bilas and Ruth Lichterman, were the first working programmers.

International Programmers' Day is celebrated annually on January 7th.

Nature of the work

Some of this section is from the Occupational Outlook Handbook, 2006–07 Edition, which is in the public domain as a work of the United States Government.

Computer programmers write, test, debug, and maintain the detailed instructions, called computer programs, that computers must follow to perform their functions. Programmers also conceive, design, and test logical structures for solving problems by computer. Many technical innovations in programming — advanced computing technologies and sophisticated new languages and programming tools — have redefined the role of a programmer and elevated much of the programming work done today. Job titles and descriptions may vary, depending on the organization.

Programmers work in many settings, including corporate information technology departments, big software companies, and small service firms. Many professional programmers also work for consulting companies at clients' sites as contractors. Licensing is not typically required to work as a programmer, although professional certifications are commonly held by programmers. Programming is widely considered a profession (although some authorities disagree on the grounds that only careers with legal licensing requirements count as a profession).

Programmers' work varies widely depending on the type of business they are writing programs for. For example, the instructions involved in updating financial records are very different from those required to duplicate conditions on an aircraft for pilots training in a flight simulator. Although simple programs can be written in a few hours, programs that use complex mathematical formulas whose solutions can only be approximated or that draw data from many existing systems may require more than a year of work. In most cases, several programmers work together as a team under a senior programmer’s supervision.

Programmers write programs according to the specifications determined primarily by more senior programmers and by systems analysts. After the design process is complete, it is the job of the programmer to convert that design into a logical series of instructions that the computer can follow. The programmer codes these instructions in one of many programming languages. Different programming languages are used depending on the purpose of the program. COBOL, for example, is commonly used for business applications which are run on mainframe and midrange computers, whereas Fortran is used in science and engineering. C++ is widely used for both scientific and business applications. Java, C# and PHP are popular programming languages for Web and business applications. Programmers generally know more than one programming language and, because many languages are similar, they often can learn new languages relatively easily. In practice, programmers often are referred to by the language they know, e.g. as Java programmers, or by the type of function they perform or environment in which they work: for example, database programmers, mainframe programmers, or Web developers.

When making changes to the source code that programs are made up of, programmers need to make other programmers aware of the task that the routine is to perform. They do this by inserting comments in the source code so that others can understand the program more easily. To save work, programmers often use libraries of basic code that can be modified or customized for a specific application. This approach yields more reliable and consistent programs and increases programmers' productivity by eliminating some routine steps.
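
A minimal sketch in Python (an arbitrary language choice for illustration) of both habits at once: a commented routine that leans on a standard library module instead of re-implementing it. The function name and score values are invented for the example.

    import statistics   # reuse basic library code instead of re-implementing it

    def summarize_scores(scores):
        """Return the mean and standard deviation of a list of exam scores.

        Comments and docstrings like this one tell other programmers what the
        routine is for without making them re-read every line of the body.
        """
        # Guard against inputs too small for a standard deviation, so callers
        # get a clear answer instead of a crash.
        if len(scores) < 2:
            return None
        return statistics.mean(scores), statistics.stdev(scores)

    print(summarize_scores([70, 85, 90, 60]))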

Testing and debugging

Programmers test a program by running it and looking for bugs. As bugs are identified, the programmer usually makes the appropriate corrections, then rechecks the program until an acceptably low level and severity of bugs remain. This process is called testing and debugging, and it is an important part of every programmer's job. Programmers may continue to fix these problems throughout the life of a program. Updating, repairing, modifying, and expanding existing programs is sometimes called maintenance programming. Programmers may contribute to user guides and online help, or they may work with technical writers to do such work.
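
As a small, hypothetical illustration of this test-and-debug loop in Python, the sketch below pairs a tiny routine with automated checks that can be rerun after every fix; apply_discount and its expected values are invented for the example.

    import unittest

    def apply_discount(price, percent):
        """Return the price reduced by the given percentage."""
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_case(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

    if __name__ == "__main__":
        unittest.main()   # rerun after every fix until the failures are gone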

Certain scenarios or execution paths may be difficult to test, in which case the programmer may elect to test by inspection which involves a human inspecting the code on the relevant execution path, perhaps hand executing the code. Test by inspection is also sometimes used as a euphemism for inadequate testing. It may be difficult to properly assess whether the term is being used euphemistically.

Application versus system programming

Computer programmers often are grouped into two broad types: application programmers and systems programmers. Application programmers write programs to handle a specific job, such as a program to track inventory within an organization. They also may revise existing packaged software or customize generic applications which are frequently purchased from independent software vendors. Systems programmers, in contrast, write programs to maintain and control computer systems software, such as operating systems and database management systems. These workers make changes in the instructions that determine how the network, workstations, and CPU of the system handle the various jobs they have been given and how they communicate with peripheral equipment such as printers and disk drives. Because of their knowledge of the entire computer system, systems programmers often help applications programmers debug, or determine the source of, problems that may occur with their programs.

Types of software

Programmers in software development companies may work directly with experts from various fields to create software — either programs designed for specific clients or packaged software for general use — ranging from computer and video games to educational software to programs for desktop publishing and financial planning. Programming of packaged software constitutes one of the most rapidly growing segments of the computer services industry.

In some organizations, particularly small ones, workers commonly known as programmer analysts are responsible for both the systems analysis and the actual programming work. The transition from a mainframe environment to one that is based primarily on personal computers (PCs) has blurred the once rigid distinction between the programmer and the user. Increasingly, adept end users are taking over many of the tasks previously performed by programmers. For example, the growing use of packaged software, such as spreadsheet and database management software packages, allows users to write simple programs to access data and perform calculations.

In addition, the rise of the Internet has made Web development a huge part of the programming field. More and more software applications nowadays are Web applications that can be used by anyone with a Web browser. Examples of such applications include the Google search service, the Hotmail e-mail service, and the Flickr photo-sharing service.


Tuesday, March 3, 2009

TIOBE Programming Community Index for February 2009

February Headline: RPG (AS/400) replaces COBOL in top 20

The TIOBE Programming Community index gives an indication of the popularity of programming languages. The index is updated once a month. The ratings are based on the number of skilled engineers world-wide, courses and third party vendors. The popular search engines Google, MSN, Yahoo!, and YouTube are used to calculate the ratings. Observe that the TIOBE index is not about the best programming language or the language in which most lines of code have been written.

The index can be used to check whether your programming skills are still up to date or to make a strategic decision about what programming language should be adopted when starting to build a new software system. The definition of the TIOBE index can be found here.
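
The exact TIOBE formula is not reproduced here, but as a rough, hypothetical sketch of the idea, hit counts per language (the numbers below are invented) could be normalized into percentage ratings like this in Python:

    # Hypothetical hit counts per language, summed over several search engines.
    hits = {"Java": 19_500_000, "C": 16_200_000, "PHP": 9_800_000, "RPG (AS/400)": 700_000}

    total = sum(hits.values())
    ratings = {lang: 100 * n / total for lang, n in hits.items()}

    for lang, pct in sorted(ratings.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{lang:12s} {pct:5.2f}%")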


Source: Top 20 Programming Languages

Monday, March 2, 2009

What is a Programming Language

A programming language is a machine-readable artificial language designed to express computations that can be performed by a machine, particularly a computer. Programming languages can be used to create programs that specify the behavior of a machine, to express algorithms precisely, or as a mode of human communication.

Many programming languages have some form of written specification of their syntax and semantics, since computers require precisely defined instructions. Some are defined by a specification document (for example, an ISO Standard), while others have a dominant implementation (such as Perl).

The earliest programming languages predate the invention of the computer, and were used to direct the behavior of machines such as automated looms and player pianos. Thousands of different programming languages have been created, mainly in the computer field [1], and many more are being created every year.

Definitions

Traits often considered important for constituting a programming language:

* Function: A programming language is a language used to write computer programs, which involve a computer performing some kind of computation[2] or algorithm and possibly control external devices such as printers, robots,[3] and so on.

* Target: Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines. Some programming languages are used by one device to control another. For example PostScript programs are frequently created by another program to control a computer printer or display.

* Constructs: Programming languages may contain constructs for defining and manipulating data structures or controlling the flow of execution.

* Expressive power: The theory of computation classifies languages by the computations they are capable of expressing. All Turing complete languages can implement the same set of algorithms. ANSI/ISO SQL and Charity are examples of languages that are not Turing complete, yet often called programming languages.[4][5]

Some authors restrict the term "programming language" to those languages that can express all possible algorithms;[6] sometimes the term "computer language" is used for more limited artificial languages.

Non-computational languages, such as markup languages like HTML or formal grammars like BNF, are usually not considered programming languages. A programming language (which may or may not be Turing complete) may be embedded in these non-computational (host) languages.

Usage

A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data. A programmer uses the abstractions present in the language to represent the concepts involved in a computation. These concepts are represented as a collection of the simplest elements available (called primitives). [7]

Programming languages differ from most other forms of human expression in that they require a greater degree of precision and completeness. When using a natural language to communicate with other people, human authors and speakers can be ambiguous and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers "do exactly what they are told to do", and cannot "understand" what code the programmer intended to write. The combination of the language definition, a program, and the program's inputs must fully specify the external behavior that occurs when the program is executed, within the domain of control of that program.

Programs for a computer might be executed in a batch process without human interaction, or a user might type commands in an interactive session of an interpreter. In this case the "commands" are simply programs, whose execution is chained together. When a language is used to give commands to a software application (such as a shell) it is called a scripting language[citation needed].

Many languages have been designed from scratch, altered to meet new needs, combined with other languages, and eventually fallen into disuse. Although there have been attempts to design one "universal" computer language that serves all purposes, all of them have failed to be generally accepted as filling this role.[8] The need for diverse computer languages arises from the diversity of contexts in which languages are used:

* Programs range from tiny scripts written by individual hobbyists to huge systems written by hundreds of programmers.
* Programmers range in expertise from novices who need simplicity above all else, to experts who may be comfortable with considerable complexity.
* Programs must balance speed, size, and simplicity on systems ranging from microcontrollers to supercomputers.
* Programs may be written once and not change for generations, or they may undergo nearly constant modification.
* Finally, programmers may simply differ in their tastes: they may be accustomed to discussing problems and expressing them in a particular language.

One common trend in the development of programming languages has been to add more ability to solve problems using a higher level of abstraction. The earliest programming languages were tied very closely to the underlying hardware of the computer. As new programming languages have developed, features have been added that let programmers express ideas that are more remote from simple translation into underlying hardware instructions. Because programmers are less tied to the complexity of the computer, their programs can do more computing with less effort from the programmer. This lets them write more functionality per time unit.[9]

Natural language processors have been proposed as a way to eliminate the need for a specialized language for programming. However, this goal remains distant and its benefits are open to debate. Edsger Dijkstra took the position that the use of a formal language is essential to prevent the introduction of meaningless constructs, and dismissed natural language programming as "foolish."[10] Alan Perlis was similarly dismissive of the idea.[11]

Elements

All programming languages have some primitive building blocks for the description of data and the processes or transformations applied to them (like the addition of two numbers or the selection of an item from a collection). These primitives are defined by syntactic and semantic rules which describe their structure and meaning respectively.
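
A few lines of Python (an arbitrary choice of language for illustration) show such primitives in action, including the two operations mentioned above:

    # Primitive building blocks in one small example:
    numbers = [3, 1, 4, 1, 5, 9]        # data description: a collection of integers

    total = numbers[0] + numbers[1]     # a primitive operation: addition of two numbers
    largest = max(numbers)              # selection of an item from a collection

    # Syntax rules say these lines are well-formed; semantic rules say what they compute.
    print(total, largest)               # 4 9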

Source: Programming Language

What is a Database

A database is a structured collection of records or data that is stored in a computer system. The structure is achieved by organizing the data according to a database model. The model in most common use today is the relational model. Other models, such as the hierarchical model and the network model, use a more explicit representation of relationships.

Architecture

Depending on the intended use, there are a number of database architectures in use. Many databases use a combination of strategies. On-line Transaction Processing systems (OLTP) often use a row-oriented datastore architecture, while data-warehouse and other retrieval-focused applications like Google's BigTable, or bibliographic database (library catalogue) systems may use a Column-oriented DBMS architecture.
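
A toy Python sketch of the two layouts, with invented order data, shows why each suits its workload: the row layout keeps a whole record together for point lookups, while the column layout keeps a whole column together for scans and aggregates.

    # Row-oriented: each record kept together -- good for OLTP-style point lookups.
    rows = [
        {"id": 1, "customer": "Ada",  "amount": 120},
        {"id": 2, "customer": "Alan", "amount": 80},
        {"id": 3, "customer": "Ada",  "amount": 45},
    ]

    # Column-oriented: each column kept together -- good for analytical scans.
    columns = {
        "id":       [1, 2, 3],
        "customer": ["Ada", "Alan", "Ada"],
        "amount":   [120, 80, 45],
    }

    # OLTP-style access: fetch one whole record.
    order = next(r for r in rows if r["id"] == 2)

    # Warehouse-style access: aggregate one column without touching the others.
    total = sum(columns["amount"])
    print(order, total)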

Document-Oriented, XML, knowledgebases, as well as frame databases and RDF-stores (aka triple-stores), may also use a combination of these architectures in their implementation.

Finally, it should be noted that not all databases have or need a database 'schema' (so-called schema-less databases).

Over many years the database industry has been dominated by General Purpose database systems, which offer a wide range of functions that are applicable to many, if not most circumstances in modern data processing. These have been enhanced with extensible datatypes, pioneered in the PostgreSQL project, to allow a very wide range of applications to be developed.

There are also other types of database which cannot be classified as relational databases.

Database management systems

A computer database relies upon software to organize the storage of data. This software is known as a database management system (DBMS). Database management systems are categorized according to the database model that they support. The model tends to determine the query languages that are available to access the database. A great deal of the internal engineering of a DBMS, however, is independent of the data model, and is concerned with managing factors such as performance, concurrency, integrity, and recovery from hardware failures. In these areas there are large differences between products.

A Relational Database Management System (RDBMS) implements the features of the relational model outlined above. In this context, Date's "Information Principle" states: "the entire information content of the database is represented in one and only one way. Namely as explicit values in column positions (attributes) and rows in relations (tuples). Therefore, there are no explicit pointers between related tables."
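
A minimal illustration using Python's built-in sqlite3 module (chosen only because it ships with Python): the customers and orders tables are hypothetical, and the join relates rows purely by matching column values, with no explicit pointers between tables.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INTEGER, amount INTEGER);
        INSERT INTO customers VALUES (1, 'Ada'), (2, 'Alan');
        INSERT INTO orders    VALUES (10, 1, 120), (11, 2, 80);
    """)

    # The relationship is expressed only by matching column values, not pointers.
    for row in con.execute("""
            SELECT c.name, o.amount
            FROM customers c JOIN orders o ON o.customer_id = c.id
            ORDER BY c.name"""):
        print(row)          # ('Ada', 120) then ('Alan', 80)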

Database models

Post-relational database models

Products offering a more general data model than the relational model are sometimes classified as post-relational. The data model in such products incorporates relations but is not constrained by the Information Principle, which requires that all information is represented by data values in relations.

Some of these extensions to the relational model actually integrate concepts from technologies that pre-date the relational model. For example, they allow representation of a directed graph with trees on the nodes.

Some products implementing such models have been built by extending relational database systems with non-relational features. Others, however, have arrived in much the same place by adding relational features to pre-relational systems. Paradoxically, this allows products that are historically pre-relational, such as PICK and MUMPS, to make a plausible claim to be post-relational in their current architecture.

Object database models

In recent years, the object-oriented paradigm has been applied to database technology, creating a new programming model known as object databases. These databases attempt to bring the database world and the application programming world closer together, in particular by ensuring that the database uses the same type system as the application program. This aims to avoid the overhead (sometimes referred to as the impedance mismatch) of converting information between its representation in the database (for example as rows in tables) and its representation in the application program (typically as objects). At the same time, object databases attempt to introduce the key ideas of object programming, such as encapsulation and polymorphism, into the world of databases.

A variety of these ways have been tried for storing objects in a database. Some products have approached the problem from the application programming end, by making the objects manipulated by the program persistent. This also typically requires the addition of some kind of query language, since conventional programming languages do not have the ability to find objects based on their information content. Others have attacked the problem from the database end, by defining an object-oriented data model for the database, and defining a database programming language that allows full programming capabilities as well as traditional query facilities.

Database storage structures

Relational database tables and indexes are typically stored in memory or on hard disk in one of many forms: ordered or unordered flat files, ISAM, heaps, hash buckets, or B+ trees. Each has its own advantages and disadvantages, discussed further in the main article on this topic. The most commonly used are B+ trees and ISAM.
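
As one hedged, in-memory sketch of a hash-bucket layout (a toy Python class, not how any particular engine implements it), records are scattered across a fixed number of buckets by hashing the key, so a lookup only has to examine a single bucket:

    class HashBucketStore:
        """Toy hash-bucket layout: records are spread across a fixed number of
        buckets by hashing the key, so a lookup touches only one bucket."""
        def __init__(self, n_buckets=8):
            self.buckets = [[] for _ in range(n_buckets)]

        def _bucket(self, key):
            return self.buckets[hash(key) % len(self.buckets)]

        def insert(self, key, record):
            self._bucket(key).append((key, record))

        def lookup(self, key):
            return [rec for k, rec in self._bucket(key) if k == key]

    store = HashBucketStore()
    store.insert(42, {"name": "Ada"})
    store.insert(7, {"name": "Alan"})
    print(store.lookup(42))   # [{'name': 'Ada'}]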

Object databases use a range of storage mechanisms. Some use virtual memory mapped files to make the native language (C++, Java etc.) objects persistent. This can be highly efficient but it can make multi-language access more difficult. Others break the objects down into fixed and varying length components that are then clustered tightly together in fixed sized blocks on disk and reassembled into the appropriate format either for the client or in the client address space. Another popular technique is to store the objects in tuples, much like a relational database, which the database server then reassembles for the client.

Other important design choices relate to the clustering of data by category (such as grouping data by month, or location), creating pre-computed views known as materialized views, partitioning data by range or hash. Memory management and storage topology can be important design choices for database designers as well. Just as normalization is used to reduce storage requirements and improve the extensibility of the database, conversely denormalization is often used to reduce join complexity and reduce execution time for queries. [1]

Indexing

All of these databases can take advantage of indexing to increase their speed. This technology has advanced tremendously since its early uses in the 1960s and 1970s. The most common kind of index is a sorted list of the contents of some particular table column, with pointers to the row associated with the value. An index allows a set of table rows matching some criterion to be located quickly. Typically, indexes are also stored in the various forms of data-structure mentioned above (such as B-trees, hashes, and linked lists). Usually, a specific technique is chosen by the database designer to increase efficiency in the particular case of the type of index required.

Most relational DBMSs and some object DBMSs have the advantage that indexes can be created or dropped without changing existing applications that make use of them. The database chooses between many different strategies based on which one it estimates will run the fastest. In other words, indexes are transparent to the application or end user querying the database; while they affect performance, any SQL command will run with or without an index to compute the result of an SQL statement. The RDBMS will produce a plan of how to execute the query, which is generated by analyzing the run times of the different algorithms and selecting the quickest. Some of the key algorithms that deal with joins are the nested loop join, sort-merge join, and hash join. Which of these is chosen depends on whether an index exists, what type it is, and its cardinality.
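
The transparency point can be seen with Python's sqlite3 module: the application SQL below is unchanged before and after CREATE INDEX, and only the query plan reported by EXPLAIN QUERY PLAN differs (the table and column names are invented for the example).

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount INTEGER)")
    con.executemany("INSERT INTO orders (customer, amount) VALUES (?, ?)",
                    [("c%05d" % i, i) for i in range(10_000)])

    query = "SELECT amount FROM orders WHERE customer = ?"

    # Without an index the planner must scan the whole table.
    print(con.execute("EXPLAIN QUERY PLAN " + query, ("c04999",)).fetchall())

    # The same application SQL can use the index afterwards -- nothing else changes.
    con.execute("CREATE INDEX idx_orders_customer ON orders(customer)")
    print(con.execute("EXPLAIN QUERY PLAN " + query, ("c04999",)).fetchall())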

An index speeds up access to data, but it has disadvantages as well. First, every index increases the amount of storage on the hard drive necessary for the database file, and second, the index must be updated each time the data are altered, and this costs time. (Thus an index saves time in the reading of data, but it costs time in entering and altering data. It thus depends on the use to which the data are to be put whether an index is on the whole a net plus or minus in the quest for efficiency.)

A special case of an index is a primary index, or primary key, which is distinguished in that the primary index must ensure a unique reference to a record. Often, for this purpose one simply uses a running index number (ID number). Primary indexes play a significant role in relational databases, and they can speed up access to data considerably.

Transactions and concurrency

In addition to their data model, most practical databases ("transactional databases") attempt to enforce a database transaction. Ideally, the database software should enforce the ACID rules, summarized here:

* Atomicity: Either all the tasks in a transaction must be done, or none of them. The transaction must be completed, or else it must be undone (rolled back).
* Consistency: Every transaction must preserve the integrity constraints — the declared consistency rules — of the database. It cannot place the data in a contradictory state.
* Isolation: Two simultaneous transactions cannot interfere with one another. Intermediate results within a transaction are not visible to other transactions.
* Durability: Completed transactions cannot be aborted later or their results discarded. They must persist through (for instance) restarts of the DBMS after crashes.

In practice, many DBMSs allow most of these rules to be selectively relaxed for better performance.
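
Atomicity in particular is easy to demonstrate with Python's sqlite3 module: in the hypothetical transfer below, the second update violates a constraint, so the whole transaction rolls back and the first update disappears with it.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY,"
                " balance INTEGER CHECK (balance >= 0))")
    con.executemany("INSERT INTO accounts VALUES (?, ?)",
                    [("alice", 100), ("bob", 50)])
    con.commit()

    try:
        with con:   # one transaction: commit on success, roll back on any exception
            con.execute("UPDATE accounts SET balance = balance + 150 WHERE name = 'bob'")
            # This debit drives alice below zero, violating the CHECK constraint ...
            con.execute("UPDATE accounts SET balance = balance - 150 WHERE name = 'alice'")
    except sqlite3.IntegrityError:
        pass        # ... so the credit to bob above is rolled back as well

    print(con.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
    # [('alice', 100), ('bob', 50)] -- neither half of the failed transfer persisted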

Concurrency control is a method used to ensure that transactions are executed in a safe manner and follow the ACID rules. The DBMS must be able to ensure that only serializable, recoverable schedules are allowed, and that no actions of committed transactions are lost while undoing aborted transactions.

Replication

Replication of databases is closely related to transactions. If a database can log its individual actions, it is possible to create a duplicate of the data in real time. The duplicate can be used to improve performance or availability of the whole database system. Common replication concepts include:

* Master/Slave Replication: All write requests are performed on the master and then replicated to the slaves.
* Quorum: The result of Read and Write requests are calculated by querying a "majority" of replicas.
* Multimaster: Two or more replicas sync each other via a transaction identifier.

Parallel synchronous replication of databases enables transactions to be replicated on multiple servers simultaneously, which provides a method for backup and security as well as data availability.
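
As a hedged, purely in-memory Python sketch of the quorum idea (not any real replication protocol), choosing R and W so that R + W > N guarantees that every read quorum overlaps the most recent write quorum, so a read always sees the newest version.

    import random

    class Replica:
        """Hypothetical in-memory replica storing (value, version) per key."""
        def __init__(self):
            self.store = {}
        def write(self, key, value, version):
            self.store[key] = (value, version)
        def read(self, key):
            return self.store.get(key, (None, 0))

    N, W, R = 5, 3, 3          # R + W > N: read and write quorums must overlap
    replicas = [Replica() for _ in range(N)]

    def quorum_write(key, value, version):
        for r in random.sample(replicas, W):   # any W replicas acknowledge the write
            r.write(key, value, version)

    def quorum_read(key):
        answers = [r.read(key) for r in random.sample(replicas, R)]
        return max(answers, key=lambda vv: vv[1])   # newest version wins

    quorum_write("x", "hello", version=1)
    quorum_write("x", "world", version=2)
    print(quorum_read("x"))   # ('world', 2): some replica in the read set saw version 2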

Security

Database security denotes the system, processes, and procedures that protect a database from unintended activity.

Security is usually enforced through access control, auditing, and encryption.

* Access control ensures and restricts who can connect and what can be done to the database.
* Auditing logs what action or change has been performed, when and by whom.
* Encryption: Since security has become a major issue in recent years, many commercial database vendors provide built-in encryption mechanisms. Data is encoded natively into the tables and deciphered "on the fly" when a query comes in. Connections can also be secured and encrypted if required using DSA, MD5, SSL or legacy encryption standard.

Enforcing security is one of the major tasks of the DBA.

In the United Kingdom, legislation protecting the public from unauthorized disclosure of personal information held on databases falls under the Office of the Information Commissioner. United Kingdom based organizations holding personal data in electronic format (databases for example) are required to register with the Data Commissioner.

Locking

Locking is how the database handles multiple concurrent operations. This is how concurrency and some form of basic integrity is managed within the database system. Such locks can be applied on a row level, or on other levels like page (a basic data block), extent (multiple array of pages) or even an entire table. This helps maintain the integrity of the data by ensuring that only one process at a time can modify the same data.

In basic filesystem files or folders, only one lock at a time can be set, restricting usage to one process only. Databases, on the other hand, can set and hold multiple locks at the same time on different levels of the physical data structure. How locks are set and how long they last is determined by the database engine's locking scheme, based on the SQL or transactions submitted by users. Generally speaking, light activity on the database should translate into little or no locking.

For most DBMS systems on the market, locks are generally shared or exclusive. An exclusive lock means that no other lock can be acquired on the current data object as long as the exclusive lock lasts. Exclusive locks are usually set while the database needs to change data, for example during an UPDATE or DELETE operation.

Several shared locks, by contrast, can be held on the same data structure at the same time. Shared locks are usually used while the database is reading data, for example during a SELECT operation. The number and nature of the locks, and the time a lock holds a data block, can have a huge impact on database performance. Bad locking can lead to disastrous performance (usually the result of poor SQL requests or an inadequate physical database structure).
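
A minimal Python sketch of the shared-versus-exclusive idea (a toy readers-writer lock, not any database engine's actual lock manager): any number of readers may hold the shared lock together, while a writer's exclusive lock shuts everyone else out.

    import threading

    class SharedExclusiveLock:
        """Toy lock: many readers may hold the shared lock at once, but an
        exclusive (writer) lock excludes all readers and other writers."""
        def __init__(self):
            self._cond = threading.Condition()
            self._readers = 0
            self._writer = False

        def acquire_shared(self):
            with self._cond:
                while self._writer:          # wait out any exclusive holder
                    self._cond.wait()
                self._readers += 1

        def release_shared(self):
            with self._cond:
                self._readers -= 1
                if self._readers == 0:
                    self._cond.notify_all()

        def acquire_exclusive(self):
            with self._cond:
                while self._writer or self._readers:   # wait until fully free
                    self._cond.wait()
                self._writer = True

        def release_exclusive(self):
            with self._cond:
                self._writer = False
                self._cond.notify_all()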

Default locking behavior is enforced by the isolation level of the data server. Changing the isolation level affects how shared or exclusive locks are set on the data for the entire database system. Default isolation is generally level 1, where data cannot be read while it is being modified; this prevents "ghost data" from being returned to the end user.

At some point, intensive or inappropriate exclusive locking can lead to a "deadlock" between two locks, where neither lock can be released because each is waiting for a resource held by the other. The database has a fail-safe mechanism and will automatically "sacrifice" one of the locks, releasing the resource; the processes or transactions involved in the deadlock are then rolled back.

Databases can also be locked for other reasons, such as access restrictions for given levels of user. Some databases are also locked for routine database maintenance, which prevents changes being made during the maintenance (see "Locking tables and databases" in IBM's documentation for more detail). However, many modern databases don't lock the database during routine maintenance; see, for example, "Routine Database Maintenance" for PostgreSQL.

Applications of databases

Databases are used in many applications, spanning virtually the entire range of computer software. Databases are the preferred method of storage for large multiuser applications, where coordination between many users is needed. Even individual users find them convenient, and many electronic mail programs and personal organizers are based on standard database technology. Software database drivers are available for most database platforms so that application software can use a common Application Programming Interface to retrieve the information stored in a database. Two commonly used database APIs are JDBC and ODBC.
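
Python's counterpart to such common APIs is the DB-API; the sketch below uses the built-in sqlite3 driver, and with another DB-API driver essentially only the connection call (and possibly the parameter placeholder style) would change. The notes table is invented for the example.

    import sqlite3   # any driver following Python's DB-API exposes this call pattern

    con = sqlite3.connect(":memory:")    # with another driver, only this line changes
    cur = con.cursor()
    cur.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
    cur.execute("INSERT INTO notes (body) VALUES (?)", ("hello from the common API",))
    con.commit()
    cur.execute("SELECT id, body FROM notes")
    print(cur.fetchall())
    con.close()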

What is Information Technology

Information technology (IT), as defined by the Information Technology Association of America (ITAA), is "the study, design, development, implementation, support or management of computer-based information systems, particularly software applications and computer hardware." IT deals with the use of electronic computers and computer software to convert, store, protect, process, transmit, and securely retrieve information.

Today, the term information technology has ballooned to encompass many aspects of computing and technology, and the term has become very recognizable. The information technology umbrella can be quite large, covering many fields. IT professionals perform a variety of duties that range from installing applications to designing complex computer networks and information databases. A few of the duties that IT professionals perform may include data management, networking, engineering computer hardware, database and software design, as well as the management and administration of entire systems.

When computer and communications technologies are combined, the result is information technology, or "infotech". Information Technology (IT) is a general term that describes any technology that helps to produce, manipulate, store, communicate, and/or disseminate information. When speaking of information technology as a whole, the use of computers and the handling of information are understood to go hand in hand.

The term Information Technology (IT) was coined by Jim Domsic of Michigan in November 1981.[citation needed] Domsic created the term to modernize the outdated phrase "data processing". Domsic at the time worked as a computer manager for an automotive related industry.

In recent years ABET and the ACM have collaborated to form accreditation and curriculum standards for degrees in Information Technology as a distinct field of study separate from both Computer Science and Information Systems. SIGITE is the ACM working group for defining these standards.

Source: Information Technology