
Feb 25, 2011

Quantum Mechanics and Information



Quantum mechanics lies at the heart of thermodynamics, accounting for such phenomena as 
spin-statistics, correct Boltzmann counting, and most of the interesting many-body system characteristics such as superfluidity and Fermi pressure. At its most basic level, to 
put this discipline in an information-theoretic context requires generalizing the idea of 
information. 
Because of the intricate correlations that quantum systems can possess, they are difficult to 
interpret within the bounds of classical information theory. But a complete theory which 
accounts for both the quantum and classical cases has been developed (Cer97b). The von 
Neumann entropy for a quantum system, represented by a density matrix ρA, where A is a 
quantum system or quantum source, 
H(A) = −Tr{ρ_A log ρ_A}
is familiar to students of statistical mechanics for being a measure of the disorder of a quantum 
system (see the explanation in I.1.i.), but it was recently established that H(A) is the minimum 
number of qubits needed to losslessly code an ensemble of quantum states, thus giving 
information-theoretic importance to this representation – see the argument later in this section 
for a summary of the proof (Sch95). Further, if ρΑ is composed of orthogonal quantum states it 
clearly reduces to a classical Shannon entropy H(A), since we can just diagonalize ρA in the 
orthogonal basis; we will discuss this fact later in the context of quantum channels. For two 
operators A, B, the definitions of joint, conditional, and mutual entropy can be defined as 
follows, where ρAB is the joint density matrix:  
H(A, B) = −Tr{ρ_AB log ρ_AB}
H(A | B) = −Tr{ρ_AB log ρ_A|B}
I(A, B) = Tr{ρ_AB log ρ_A;B}
The definitions of ρA|B and ρA ;B are surprisingly subtle. They are given by 
ρ_A|B = 2^(−σ_AB),
ρ_A;B = lim_(n→∞) [(ρ_A ⊗ ρ_B)^(−1/n) ρ_AB^(1/n)]^n
respectively, where 
σ_AB = 1_A ⊗ log₂ρ_B − log₂ρ_AB 
and
ρ_A = Tr_B{ρ_AB}, ρ_B = Tr_A{ρ_AB}. 
(As noted earlier, TrB means to take the partial trace of the density matrix; e.g., 
(ρ_A)_(aa′) = Σ_b (ρ_AB)_(ab, a′b). 
This is where the duality between projection operators and density matrices becomes useful.) 
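To make these definitions concrete, here is a minimal numpy sketch (my own addition, not from the cited papers) that builds ρ_A by partial trace and computes the von Neumann entropies from eigenvalues; the conditional and mutual entropies are obtained via the identities H(A|B) = H(A,B) − H(B) and I(A;B) = H(A) + H(B) − H(A,B) rather than via the ρ_A|B operator itself. For a Bell pair it reproduces the negative conditional entropy discussed below.

import numpy as np

def von_neumann_entropy(rho):
    """H(rho) = -Tr[rho log2 rho], computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace_B(rho_AB, dA, dB):
    """Tr_B: (rho_A)_{a a'} = sum_b (rho_AB)_{ab, a'b}."""
    return np.trace(rho_AB.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

# Bell state (|00> + |11>)/sqrt(2) as a joint density matrix rho_AB
psi = np.zeros(4); psi[0] = psi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(psi, psi)

rho_A = partial_trace_B(rho_AB, 2, 2)
rho_B = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=0, axis2=2)   # Tr_A

H_AB = von_neumann_entropy(rho_AB)      # 0 (pure joint state)
H_A  = von_neumann_entropy(rho_A)       # 1 bit
H_B  = von_neumann_entropy(rho_B)       # 1 bit
print("H(A|B) =", H_AB - H_B)           # -1: negative conditional entropy
print("I(A;B) =", H_A + H_B - H_AB)     # 2: exceeds both H(A) and H(B)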



The justification of these choices is basically by analogy: as ρΑΒ becomes diagonal, for example, 
the quantum conditional entropy converges to its classical value; also, ρA;B satisfies the classical 
identity I(A,B) = H(A)+H(B) - H(A,B). But ρA|B and ρA ;B are not true density matrices. For 
example, ρA|B may have eigenvalues greater than 1, when σAB has negative eigenvalues – in such 
situations one obtains negative conditional entropies! It has been shown that a separable density 
matrix ρ_AB (i.e., one that can be expanded in the form 
Σ_i a_i ρ_A^(i) ⊗ ρ_B^(i)
and therefore can’t be told apart from a classical ensemble containing two completely decoupled 
systems) must have positive semidefinite σAB and thus positive conditional entropies (Cer97a),
but sufficiently entangled systems can indeed guarantee strange behavior like negative 
conditional entropies. And oddly enough, I(A, B) can exceed the entropy of each individual set of 
variables! The above definitions and generalizations thereof have been applied to quantum 
teleportation and superdense coding (i.e., where two classical bits are encoded in one qubit) in 
the paper (Cer97b). In this paper the authors treat entangled qubits as virtual conjugate particles 
e, ē (collectively called ebits, or entangled qubits), carrying opposite information content (±1 
qubit each), and suggest that ē may be visualized as e going backwards through time! One can, 
for example, draw a Feynman diagram for ultradense coding (consumption of a shared entangled 
state, plus transmission of one qubit, is equivalent to transmission of two classical bits). Thus 
these ‘information quanta’ suddenly have many properties analogous to those of virtual particles 
in relativistic quantum field theory – in particular, only through interactions that result in 
classical bits may they become visible to the world. Importantly, this formalism allows an 
intuitive picture of conserved information flows. Ebits can be created simply by transmitting one 
bit of a Bell pair while keeping the other, but one cannot create a qubit directly from an ebit 
(some classical bits are required). An interesting point is that if the ebit used for teleportation or 
superdense coding is only partly entangled, then the transmission will appear noisy. 
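As a concrete check of the superdense-coding bookkeeping described above (one shared ebit plus one transmitted qubit carrying two classical bits), the following numpy sketch (my own illustration, not taken from (Cer97b)) encodes two bits by a purely local Pauli operation on Alice's half of a Bell pair and verifies that the four resulting joint states are mutually orthogonal, so Bob can recover both bits with a single joint measurement:

import numpy as np
from itertools import product

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)   # (|00>+|11>)/sqrt(2)

# Alice encodes two classical bits by acting only on her qubit (first factor).
encodings = {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}
states = {bits: np.kron(U, I) @ bell for bits, U in encodings.items()}

# The four encoded states form the (orthogonal) Bell basis, so Bob,
# holding both qubits, can distinguish them with one joint measurement.
for b1, b2 in product(states, repeat=2):
    overlap = abs(np.vdot(states[b1], states[b2]))
    assert np.isclose(overlap, 1.0 if b1 == b2 else 0.0)
print("four orthogonal Bell states -> 2 classical bits per transmitted qubit")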
The Schmidt decomposition, sometimes called the polar decomposition, is a way of representing 
an entangled pure state in the form 
|ψ⟩ = Σ_(i=1)^d c_i |a_i⟩ ⊗ |b_i⟩
where the c_i are positive real coefficients, and |a_i⟩, |b_i⟩ are appropriately chosen orthonormal 
states of two subsystems A, B, which may perhaps be located far apart from one another 
(Ben95b) (Sch95). It is useful to note that local operations cannot increase the number of 
nonzero terms in the Schmidt decomposition; i.e., one cannot create entanglement with purely 
local operations (although by consuming other entangled bits, one can increase the entanglement 
of another subsystem). Local unitary operations on subsystem A can only change the 
eigenvectors of A (not the eigenvalues), and cannot affect B’s observations at all. As noted 
above, in this basis σAB is positive semidefinite. If A makes an observation of his part of the 
system, this is equivalent to him first tracing over the degrees of freedom of B’s subsystem 
(which results in an apparent mixed state of his subsystem, represented by a diagonal density 
matrix ρA in the Schmidt basis), then making an observation on this reduced density matrix: 
ρ_A = Tr_B |ψ⟩⟨ψ| = Σ_(i=1)^d c_i² |a_i⟩⟨a_i|
Then the entropy of entanglement is defined to be 

E = −Tr{ρ_A log₂(ρ_A)} = −Tr{ρ_B log₂(ρ_B)} = −Σ_(i=1)^d c_i² log₂(c_i²). 
Note that it is just the Shannon entropy of the squared Schmidt coefficients c_i². E = 0 for a direct 
product, and E = 1 for a Bell state. It can be proven (Pop97) that this is 
the unique measure of entanglement for pure states. The argument, like Shannon's definition of 
information, is based on two axioms: it is impossible to increase entanglement purely by local 
operations, and it is possible to reversibly transfer entanglement from some set of shared ebits to 
previously unentangled qubits (leaving the original ebits unentangled) (Ben96c). (The last 
statement is true asymptotically; one can reversibly transform k systems in one entangled state 
into n systems in pure singlet states only in the limit n, k → ∞; in fact n/k is often asymptotically 
an irrational number, so due to incommensurability such reversible procedures can't even exist 
except in the infinite limit!) Formally these two statements are akin to the Second Law of 
thermodynamics and the statement that all reversible heat engines are equally efficient, so the 
derivation of the entropy of entanglement is completely analogous to the derivation of entropy in 
thermodynamics! Thus, proceeding as in thermodynamics, we require the degree of 
entanglement to be extensive, we can measure the entanglement of an arbitrary state by 
transforming it into a set of singlet states (which are defined to have entanglement equal to 1), 
and so on. Unfortunately this definition is only good in the quantum analogy to the 
‘thermodynamic limit,’ meaning access to infinitely many pure ebits, which rarely occurs in real 
life. Also, nobody has come up with a measure of entanglement for a mixed state, since no one 
has exhibited a reversible way to convert a density matrix into pure states; candidates include the 
number of singlets that can be packed into a density matrix, or the number which can be 
extracted – sometimes these two values can be different (Ben96d). Therefore many other 
definitions may be more practical, if appropriately justified; it is difficult to imagine what 
physical manifestation this measure of mixed-state entanglement might take. Indeed, the recent 
literature indicates that even respectable scientists occasionally need reprimanding due to the 
ambiguity of dealing with issues as nonintuitive as entanglement (Unr). 
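Since the entropy of entanglement of a pure state is just the Shannon entropy of the squared Schmidt coefficients, it can be computed directly with a singular value decomposition. The following numpy sketch (an illustration of mine, not taken from the references) does this for a product state, a Bell state, and a partially entangled state:

import numpy as np

def entropy_of_entanglement(psi, dA, dB):
    """Schmidt coefficients of a bipartite pure state via the SVD;
    E = -sum_i c_i^2 log2(c_i^2)."""
    c = np.linalg.svd(psi.reshape(dA, dB), compute_uv=False)  # Schmidt coefficients
    p = c**2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

product_state = np.kron([1.0, 0.0], [1.0, 0.0])            # |00>
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)                 # (|00>+|11>)/sqrt(2)
partial = np.array([np.cos(0.3), 0, 0, np.sin(0.3)])       # partially entangled

print(entropy_of_entanglement(product_state, 2, 2))  # 0
print(entropy_of_entanglement(bell, 2, 2))           # 1 (one ebit)
print(entropy_of_entanglement(partial, 2, 2))        # between 0 and 1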
One interesting manifestation of these results is that two people with many partially-entangled 
qubits can increase the degree of entanglement of a few qubits to arbitrary purity, at the expense 
of the others. Note that this does not contradict our above statements, since only the total
entanglement is required to be conserved. For a large number n of entangled pairs of two-state 
particles, each with entropy of entanglement E < 1, the yield of pure singlets goes like nE − O(log(n)) 
(compare this to the pure-state yield of bulk spin resonance quantum computation, 
below). The process of Schmidt projection is as follows: suppose that the initial system is in the 
product state  
|ψ⟩ = ∏_(i=1)^n (cos θ |a_1i⟩|b_1i⟩ + sin θ |a_2i⟩|b_2i⟩)
which has 2^n terms, each with one of n + 1 distinct coefficients cos(θ)^(n−k) sin(θ)^k. We can treat 
these states as n + 1 orthogonal subspaces, labeled by the power k of sin(θ) in the coefficient. A 
and B project the state ψ onto one of these subspaces by making an observation, and obtaining 
some k; this collapses the state down into a maximally entangled state, which occupies an n!/((n−k)!k!)-dimensional subspace of the original space; if lucky, one can get even more ebits than one 
started with (but the expected entanglement cannot be greater than before). It’s analogous to the 
method of taking a biased coin, say with probability of heads .3, and getting an unbiased random 
bit by flipping the coin and keeping sequences HT = 1 and TH = 0 (probability .21 each), and 
discarding TT and HH (probability .49 and .09, respectively). In our case, measuring a power k
symmetrizes our state by selecting all possible combinations with some equal probability 
(although we cannot actively choose that probability) and discarding all the states with 37
probabilities that differ from it. One can transform this new state formally into singlets as 
follows: measure k for each of m batches, each containing n entangled pairs, getting k_i, i = 1..m. 
Let D_m be the product of all the coefficients n!/(k_i!(n−k_i)!), i = 1..m, and continue until D_m ∈ [2^l, 
2^l(1+ε)], where ε is the desired maximum error in the entanglement, and l is some integer. Then 
A and B make a measurement on the system which projects it into one of two subspaces, one of 
dimension 2^(l+1) (which is in a maximally entangled state of two 2^l-dimensional subsystems) and 
one of dimension 2(D_m − 2^l) (which discards all the entanglement). 
again effectively symmetrized the state; in the latter, we lose everything. (Such is the risky 
nature of quantum engineering.) Finally, another round of Schmidt decomposition arranges the 
density matrix into a product of l singlets.  
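The nE − O(log(n)) yield quoted above can be checked numerically: under the assumptions stated in the text, measuring k leaves a maximally entangled state on an n!/((n−k)!k!)-dimensional subspace, worth log₂ C(n,k) ebits, and k is binomially distributed. The following sketch (my own, a back-of-the-envelope check rather than a simulation of the full protocol) compares the expected yield with nE:

from math import comb, log2, sin

def schmidt_projection_yield(n, theta):
    """Expected ebit yield of Schmidt projection on n partially entangled
    pairs cos(theta)|a1 b1> + sin(theta)|a2 b2>: the measured k is
    binomially distributed, and outcome k leaves a maximally entangled
    state on a C(n, k)-dimensional subspace, worth log2(C(n, k)) ebits."""
    p = sin(theta) ** 2
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) * log2(comb(n, k))
               for k in range(n + 1))

theta = 0.6
p = sin(theta) ** 2
E = -p * log2(p) - (1 - p) * log2(1 - p)   # entropy of entanglement per pair
for n in (10, 100, 500):
    print(n, round(n * E, 1), round(schmidt_projection_yield(n, theta), 1))
# the expected yield approaches n*E from below, with a deficit of order log(n)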
Quantum channels, over which qubit messages are sent, are a useful abstraction. In classical 
information theory, a source A produces messages ai
 with probability p(ai
), and the fidelity of the 
coding system is the probability that the decoded message is the same as the transmitted one; in 
quantum information theory a quantum source codes each message a from a source A into a 
signal state |M_a⟩ of a system M; the ensemble of messages is then representable by a density 
matrix 
ρ = Σ_a p(a) π_a, where π_a = |M_a⟩⟨M_a|, 
and the fidelity for a channel in which r_a is received when π_a is transmitted is defined to be 
f = Σ_a p(a) Tr(π_a r_a)
(so that for perfect noiseless channels f = 1) (Sch95). Compare this to our definition of classical 
probability fidelity in I.1.ii.; we will unite these pictures in the next paragraph. Another possible 
definition of a quantum channel W, which lends itself more to problems involving noise, is to 
consider a channel to be a probability distribution on unitary transformations U which act upon 
H_signal ⊗ H_environment; before reading a quantum channel, one must take a partial trace over 
H_environment, and one can define the fidelity as 
f = min over all signals |x⟩ of ⟨x| W |x⟩
(Cal96). Note that there are subtleties for the quantum channel which one does not have to 
consider in the classical case: for example, one cannot copy an arbitrary quantum signal (the 
“no-cloning” theorem (Woo82)); the proof follows immediately from the linearity of quantum 
mechanics. Only if the signals are orthogonal can they be copied (i.e., measuring a system which 
is known to be in an eigenstate is trivial, and yields all the information available about the 
system). Fundamental to quantum communication is the idea of transposition, or placing a 
system Y in the same state as a system X (perhaps resetting X to a useless state in the process); 
unitary transposition from system X to system Y can occur iff the states in X have the same inner 
products as the corresponding states in Y, e.g. ⟨a_x|b_x⟩ = ⟨a_y|b_y⟩, which occurs iff the Hilbert 
space of Y is of dimension no less than the Hilbert space of X, for obvious reasons. If a message 
is unitarily transposed from a source A to a system M, which upon transmission is decoded into a 
message A’ (on an isomorphic quantum system) by the inverse of the unitary coding operator, 
A → M → A’
then we say M is the quantum channel for the communication. The question arises, when is the 
quantum channel M big enough for a particular quantum source A? 
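As an illustration of the ensemble fidelity f = Σ_a p(a) Tr(π_a r_a) defined above, here is a small numpy sketch (mine; the depolarizing channel is only an illustrative stand-in, not a channel discussed in the text) showing that a noiseless channel gives f = 1 while a noisy one gives f < 1:

import numpy as np

def depolarizing(rho, q):
    """Illustrative noisy channel: with probability q, replace the qubit
    by the maximally mixed state."""
    return (1 - q) * rho + q * np.eye(2) / 2

# ensemble of (non-orthogonal) signal states |M_a> with probabilities p(a)
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
signals = [(0.5, ket0), (0.5, ketp)]

def fidelity(channel):
    """f = sum_a p(a) Tr(pi_a r_a), where pi_a = |M_a><M_a| and r_a is
    the channel output when pi_a is transmitted."""
    f = 0.0
    for p_a, ket in signals:
        pi_a = np.outer(ket, ket)
        r_a = channel(pi_a)
        f += p_a * np.trace(pi_a @ r_a).real
    return f

print(fidelity(lambda rho: rho))                      # noiseless: f = 1
print(fidelity(lambda rho: depolarizing(rho, 0.2)))   # noisy:     f = 0.9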

QUANTUM COMPUTATION



Quantum Computation: Theory and Implementation






Quantum computation is a new field bridging many disciplines, including theoretical physics, 
functional analysis and group theory, electrical engineering, algorithmic computer science, and 
quantum chemistry. The goal of this thesis is to explore the fundamental knowledge base of 
quantum computation, to demonstrate how to extract the necessary design criteria for a 
functioning tabletop quantum computer, and to report on the construction of a portable NMR 
spectrometer which can characterize materials and hopefully someday perform quantum 
computation. This thesis concentrates on the digital and system-level issues. Preliminary data 
and relevant code, schematics, and technology are discussed. 



Why build a quantum computer? Because it’s not there, for one thing, and the theory of quantum 
computation, which far outstrips the degree of implementation as of today, suggests that a 
quantum computer would be an incredible machine to have around. It could factor numbers 
exponentially faster than any known classical algorithm, and it could extract information from unordered 
databases in roughly the square root of the number of instructions required of a classical computer. As 
computation is a nebulous field, the power of quantum computing to solve problems of 
traditional complexity theory is essentially unknown. A quantum computer would also have 
profound applications for pure physics. By their very nature, quantum computers would take 
exponentially less space and time than classical computers to simulate real quantum systems, and 
proposals have been made for efficiently simulating many-body systems using a quantum 
computer. A more exotic application is the creation of new states which do not appear in nature: 
for example, a highly entangled state experimentally unrealizable through other means, the 3-
qubit Greenberger-Horne-Zeilinger state  
|000⟩ + |111⟩, 
has been prepared on an NMR quantum computer with only 3 operations, a Y(π/2) pulse 
followed by 2 CNOTs, applied to the spins of trichloroethylene (Laf97). Computational 
approaches to NMR are interesting for their own sake: Warren has coupled spins as far as 
millimeters apart, taking advantage of the properties of a phenomenon known as zero-quantum 
coherence to make very detailed MRI images (War98). Performing fixed-point analyses on 
iterated schemes for population inversion can result in very broadband spin-flipping operators – 
in such a scheme, the Liouville space of operators is contracted towards a fixed point, so that 
operators that are imperfect (due to irregularities in the magnetic field, end effects of the 
stimulating coil, impurities in the sample, etc.) nevertheless result in near-perfect operations 
(Tyc85) (Fre79). These simple uses of nonlinear dynamics and the properties of solution have 
shed light on certain computational aspects of nuclear magnetic resonance. 
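At the gate level, the three-operation GHZ preparation mentioned above is easy to verify numerically. The following numpy sketch (my own; the control/target assignment of the CNOTs is an assumption, and the actual NMR pulse sequence in (Laf97) is more involved) applies a Y(π/2) rotation to the first spin of |000⟩ followed by two CNOTs and recovers equal amplitudes on |000⟩ and |111⟩:

import numpy as np

I2 = np.eye(2)

def Ry(theta):
    """Single-spin rotation about the y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def cnot(control, target, n=3):
    """CNOT on n qubits (0-indexed, qubit 0 is the leftmost bit)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for basis in range(dim):
        bits = [(basis >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        out = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[out, basis] = 1.0
    return U

psi = np.zeros(8); psi[0] = 1.0                      # |000>
psi = np.kron(Ry(np.pi / 2), np.kron(I2, I2)) @ psi  # Y(pi/2) on the first spin
psi = cnot(0, 1) @ psi                               # first CNOT
psi = cnot(0, 2) @ psi                               # second CNOT
print(np.round(psi, 3))   # amplitude 1/sqrt(2) on |000> and |111>: the GHZ state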
But performing a quantum computation as simple as factoring 15 will involve the creation and 
processing of dozens of coherences over extended periods of time. This presents an incredible 
engineering challenge, and may open up new understandings on both computation and quantum 
mechanics. 
Undoubtedly we will someday have quantum computers, if only to maintain progress in 
computation. It has been widely speculated that within a few decades, the incredible growth 
which characterizes today’s silicon age will come to a gradual halt, simply because there will be 
no more room left at the bottom: memory bits and circuit traces will essentially be single atoms 
on chips, the cost of a fabrication plant will be “the GNP of the planet,” and there will probably 
be lots of interesting problems to solve which require still more computing power. This power 
will have to come from elsewhere – that is, by stepping outside of the classical computation 
paradigm. 
Other fields related to quantum computing, which might be said to fall into 

‘quantum control,’ include quantum teleportation, quantum manipulation, and quantum 
cryptography. Quantum teleportation is an appealing way to send quantum states over long 
distances, something no amount of classical information can ever do (probably), and has been 
realized for distances exceeding 100 kilometers (Bra96). Quantum means of cryptography are 
unbreakable according to the laws of physics, and using quantum states to send messages 
guarantees that eavesdroppers can be detected (Lo99). 
  
Furthermore quantum computing joins two of the most abstruse and surprisingly nonintuitive 
subjects known to humans: computation and quantum mechanics. Both have the power to shock 
the human mind, the former with its strange powers and weaknesses, and the latter with its 
nonintuitive picture of nature. Perhaps the conjunction of these two areas will yield new insights 
into each. For the novice in this field, past overview papers on quantum computing include 
(Bar96) (Llo95) (Ste97). 
There are powerful alternatives to quantum computing which may have more immediate 
significance for the computing community. Incremental improvements in silicon fabrication 
(such as MEMS-derived insights, low-power and reversible design principles, copper wiring 
strategies, intergate trench isolation, and the use of novel dielectrics), reconfigurable computing 
and field programmable gate arrays, single-electron transistors, superconducting rapid single-flux quantum logic, stochastic and defect-tolerant computers, ballistically transporting 
conductors, printable and wearable computers, dynamical systems capable of computation, 
memory devices consisting of quantum dots and wells, and many other technologies are vying 
for the forefront. Any of them, if lucky and well-engineered, could jump to the forefront and 
perhaps render the remainder obsolete, much like the alternatives to silicon in the heyday of 
Bardeen and Intel. It is more likely that these diverse technologies will find a variety of niches in 
the world, given the amazing number of problems that people want to solve nowadays. 
This paper begins with a comprehensive critique of much of the knowledge in the quantum 
computation community, since that is perhaps fundamental to understanding why and how 
quantum computers should be built. Part I.1. covers elementary quantum mechanics, information 
theory, and computation theory, which is a prerequisite for what follows. Much of this was 
written over the past year to teach the author how all of these pictures can contribute to a 
coherent picture of the physics of computation. Part I.1. can also be thought of as an ensemble of 
collected material, in preparation for a book which delves into the theories and experiments 
behind the physics of computation; it is the opinion of the author that the definitive book on the 
physics of computation has not yet been written, and that the world needs one. It is noted that 
very little synthesis has been performed on the information here presented; most of the 
paragraphs are summaries of various different sources, assembled for easy reference and as an 
aid to thinking. Some of these notes were presented at (Bab98).



Feb 24, 2011

Feb 23, 2011

Information Technology Industry





Information technology (IT) is both a huge industry in itself, and the source of dramatic changes in business practices in all other sectors. The term IT covers a number of related disciplines and areas, from semiconductor design and production (also covered in the profile of the electronics sector), through hardware manufacture (mainframes, servers, PCs, and mobile devices), to software, data storage, backup and retrieval, networking, and, of course, the internet.
On top of this, there has been a convergence between IT and telephony, driven by transforming voice traffic from an analogue signal to a digital packet, indistinguishable from other data packets travelling through a computer network. IT in the leisure sector is already about enabling interaction with video, movies, and TV, and this trend is increasingly carrying into the business space.
Each of the major sub-areas in IT is itself capable of being divided into its component parts. Storage, for example, breaks down into disk drives, tape drives, and optical drives, and into attached storage and networked storage. PCs break down into utility-business desktop PCs, high-end work stations, and “extreme” gaming PCs for games enthusiasts—the computer and console games industry has already produced “blockbusters” that outsell top releases from Hollywood.
Software subdivides into numerous specialist areas, from relational database technologies to enterprise applications, to "horizontal" office applications exemplified by Microsoft Office 2007, for example.
Somewhat off the main track of IT at present, but very much related to both increases in processor power, and to work in simulation and artificial intelligence, is the field of robotics. This lies outside the scope of this profile, but the linkages between robotics and IT are already transforming both manufacturing and defense.
In addition, the IT arena is characterized by a number of key trends and emerging technologies which, again, have the potential to transform the way businesses currently use IT, and carry out their operations. An example of a trend would be the outsourcing of IT services, such as desktop PC support, or whole IT-supported functions, such as accounts processing. An example of a technology trend would be virtualization. This refers to the ability of large servers to be subdivided into a number of virtual machines, which can be either virtual PCs or virtual servers.

Virtualization carries with it a number of benefits, including stopping what, at one stage, looked like an endless proliferation of servers inside companies. One large server can now be split into a number of virtual servers, enabling the organization to reduce the number of boxes it has to manage. Server virtualization should not be confused with another powerful trend, the creation of virtual environments inside the machine. The fact that desktop processors are now powerful enough to mimic real-world physics in computer space is transforming both design and entertainment.

All these trends have enabled the IT industry to continue to generate a strong demand for the next generation of servers, PCs, and laptops. However, in a recession, companies of all sizes generally postpone upgrading their IT, or implementing major IT projects that are not already in hand. This makes the sector vulnerable to downturns in the economy, and the current global downturn is already having a major impact on revenues in

 

Market Analysis

According to the IT market analysis firm, Gartner, worldwide server shipments and revenues saw double-digit declines in the fourth quarter of 2008. By comparison with the same quarter in 2007, shipment numbers declined by 11.7% while revenue dropped by 15.1%. Commenting on the figures, Heeral Kota, a senior research analyst at Gartner, said: “The weakening economic environment had a deep impact on server market revenues in the fourth quarter, as companies put a hold on spending across most market segments. Almost all segments exhibited similar behavior, as users sought to reduce costs and spending, deferring projects where possible.”
Gartner said that the fall in shipments and revenue was reported across all regions apart from Japan, which managed a 4.7% revenue increase. Europe, the Middle East, and Africa (EMEA) suffered the worst decline, with revenues falling by 20.6%. Even the emerging regions of Latin America and Asia-Pacific suffered, with declines of 12.5% and 14.8%, respectively. North America server revenue declined by 14.6%.
The scale of the IT server sector as an industry can be seen from the fact that IBM, the market leader, ended 2008 with revenues of US$4 billion from server shipments, with almost exactly one-third of the global market. However, IBM saw revenues decline by 17.4% as a result of the downturn. Hewlett-Packard was next, with revenues of just under US$4 billion and with a 30% market share.
The figures in server shipments for 2008 chart the impact of the downturn fairly starkly. The sector had been enjoying fairly strong results during the first half of 2008, but a severe decline in sales set in as the intensity of the downturn began to bite, Gartner said.
If things are bleak on the server front, the outlook is just as bad for PCs. Gartner is predicting that the PC industry will suffer its sharpest unit decline in history in 2009. Gartner expects some 257 million PCs to ship worldwide through 2009. This would represent an 11.9% contraction on the numbers sold in 2008. Even after the dot.com bubble burst in 2001, global PC unit shipments only contracted 3.2%.
To view these statistics in perspective, it is important to remember that setting up a new chip-fabrication plant to make the next generation of PCs costs some US$3 billion. With margins on PCs being at an all-time low, it is very difficult for the industry to sustain itself if companies and households stop upgrading to the latest generation of PC.
According to Gartner, both developed market economies and emerging markets are forecast to go through tremendous slowdowns. After the telecoms and dot.com crash in 2001, sales of PCs in mature markets contracted by 7.9%, Gartner says, while sales growth in emerging markets slowed to 11.1% in 2002. Both these low points will be substantially exceeded in 2009. The impact will be deepened by hardware suppliers, who will act prudently and maintain inventories at an all-time low to avoid losses.
However, all is not total gloom. The trend for corporates and home users to switch to mobile PCs, rather than desktop units, will keep growth going for worldwide mobile PC shipments. Gartner is forecasting sales of 155.6 million units, up 9% from 2008. By way of contrast, desktop PC shipments will struggle to exceed 101 million, a drop of almost 32% on 2008. The most popular form in the mobile space will continue to be the mini-notebook, Gartner says. In particular, users are moving to higher-specification notebook PCs with larger screens, of around 8.9 inches. Prices, however, will continue to fall. Gartner is predicting that the price of a mid-specification mini-notebook PC with a large screen will fall, from an average of US$450 in 2008 to under US$400 by the end of 2009.
Another plus point is that the industry as a whole learned some valuable lessons in the crash of 2001, and is already demonstrating that it is much more agile, and better able to react to changing market conditions in 2009.
Not surprisingly, with all this bad news about slowing demand on actual “built” hardware, the downturn is also hitting demand for chip production. In fact, Gartner’s prediction here is that it will be at least 2013 before the semiconductor industry sees revenues comparable to those it achieved in 2008, when revenues peaked at US$256.4 billion. Over the course of 2009, the sector will see a drop in excess of 24%, with total global semiconductor revenues estimated to top out at US$194.5 billion. There is a precedent for this prediction, in that after the 2001 recession, the semiconductor industry took four years to get back to the revenues it had generated in 2000.
The contraction predicted for 2009 is considerably more gloomy than a prediction made by Gartner six months ago, when it was only predicting a contraction for the sector of 16%. On the plus side, modest single-digit growth should return in 2010.
Apart from semiconductor chip manufacturers, the other huge area in the field is memory chips, or, more specifically, DRAM chips. According to Gartner, DRAM suppliers lost more than US$13 billion in 2007 and 2008, due to massive overcapacity in the market and soft pricing. But many suppliers are now reducing supply, which should push DRAM prices back up, and put the industry on a better footing.
While the industry is absorbing all this bad news, there are positive trends that manufacturers, systems houses, value-added resellers, and consultancies can focus on. The move to replace tens or even hundreds of individual servers with large virtual servers is picking up pace, and is not going to be stopped by the recession. It is a cost-saver and efficiency driver, so companies will press ahead with virtualization programs. This, in turn, will drive sales of larger servers, and could drive applications upgrades as well. According to Gartner, worldwide virtualization software revenue will increase by 43%, from US$1.9 billion in 2008 to US$2.7 billion in 2009. Virtualization also plays well to the green agenda, and greening up IT by lowering its carbon footprint is another unstoppable trend for 2009 and 2010. Virtualization plays to this on a number of fronts. First, it is more power-efficient to run a single, large server than a number of smaller servers. Second, the manufacturing carbon footprint is lower, and, third, if the virtualization exercise extends to the desktop, then one server can replace dozens of PCs. Revenue from hosted virtual desktops (HVDs) is expected to more than triple, from US$74.1 million to US$298.6 million through 2009, Gartner says.
Storage systems in IT tend to be divided into external disk storage, where the disks are being “managed” in some way independently of processor resources, and attached storage, as in the typical PC or low-end server that comes with one or two hard drives already installed. According to the market analysis company, IDC, the worldwide external-disk storage market showed its first year-on-year fall for five years, for the last quarter of 2008. IDC reports a fall of 0.5%, with revenues totaling US$5.3 billion that quarter. Total disk-storage systems capacity shipped amounted to 2,460 petabytes, a growth of just 27.3% on the volume shipped in 2007. (The point here is that with e-mail and, now, live video, demand for storage should be vastly ahead of this figure).
Again on a positive note, one area where large and medium companies, as well as some service providers, can be expected to continue spending through the downturn is on the new IT concept known as “cloud computing.” IDC expects worldwide spending on cloud computing and cloud services to reach US$42 billion by 2012. Cloud computing is a term that essentially refers to the delivery of services to communities of users over the internet, instead of via a data centre located in the same building. Access to the service is via a web browser, and everything from storage to the processor power that drives the service is located remotely. The term itself comes from the way the internet is depicted in computer network diagrams (a non-specific “cloud”), and points to the fact that all the complexity of infrastructure that makes the service possible is hidden “in the cloud.”
According to a survey IDC conducted with almost 700 IT executives across the Asia-Pacific region, some 11% said they were already using cloud-based solutions. A further 41% indicated that they are either evaluating cloud-based solutions, or are piloting such solutions. Gartner, on the other hand, claims that cloud-computing application infrastructure technologies will still need some seven years to mature. It sees three phases of evolution for cloud computing going up to 2015 and beyond. Up to 2011, applications will be mostly opportunistic and tactical in nature, and will have little impact on mainstream IT, Gartner argues. By 2015, however, it expects cloud computing to have been commoditized, and to have become the preferred solution for many kinds of corporate applications that are now run in-house on standard IT equipment.
In summary, the immediate future for IT looks like being a period of tough belt-tightening. However, the underlying innovation in the sector, and its ability to transform mainstream business processes while enabling new kinds of business practice is undiminished, and should re-emerge to drive revenue growth once the global upturn starts to gain momentum.

Statewide Solutions to Address Information Technology Industry Workforce Needs

The U.S. Department of Labor has announced a series of investments totaling more than $7.8 million to address the workforce needs of the information technology industry. In preparing for these investments, the U.S. Department of Labor hosted forums with IT industry leaders, educators, and the public workforce system.
On February 26, 2004, ETA convened an IT Industry Executive Forum at CompTIA Headquarters in Oakbrook Terrace, IL. Executives representing 18 companies from sectors such as IT hardware, software, cross-industry end users, and service providers discussed a wide range of workforce issues concerning the information technology industry, including the role for government in the IT industry's workforce initiatives, the need to develop workforce soft skills, transferable IT skills, and the development of an industry competency model.
DOL has sought to understand and implement industry-identified strategies to confront critical workforce shortages. It has listened to employers, industry association representatives, and others associated with the information technology industry regarding some of their efforts to identify challenges and implement effective workforce strategies. DOL's Employment and Training Administration is supporting comprehensive business, education, and workforce development partnerships that have developed innovative approaches that address the workforce needs of business while also effectively helping workers find good jobs with good wages and promising career pathways in the information technology industry.
This set of workforce solutions is based on the information technology industry's priorities that address issues such as:
  • Over 90% of IT workers are performing jobs outside the IT industry, therefore it is necessary to have both IT training and complementary training in a respective business sector such as health care, manufacturing, and financial services.
  • Government may serve as an honest broker for specific issues such as promotion and image, forecasting the future of the workforce and their training needs.
  • Educators should expose kids to the new dynamic, global workplace and teach more about today's business culture.
  • Incumbent worker training has helped retain workers.
  • When recruiting for high-end jobs, the industry sees a need to develop soft skills, as well as transferable "umbrella skills."
The grants are intended to provide genuine solutions, leadership, and models for partnerships that can be replicated in different parts of the country.

Feb 20, 2011

BRIEF ABOUT COMPUTER SCIENCE

DEFINITION ABOUT COMPUTER SCIENCE



Computer science or computing science (abbreviated CS) is the study of the theoretical foundations of information and computation and of practical techniques for their implementation and application in computer systems.
Computer scientists invent algorithmic processes that create, describe, and transform information and formulate suitable abstractions to model complex systems.
Computer science has many sub-fields; some, such as computational complexity theory, study the properties of computational problems, while others, such as computer graphics, emphasize the computation of specific results. Still others focus on the challenges in implementing computations. For example, programming language theory studies approaches to describe computations, while computer programming applies specific programming languages to solve specific computational problems, and human-computer interaction focuses on the challenges in making computers and computations useful, usable, and universally accessible to people.
The general public sometimes confuses computer science with careers that deal with computers (such as information technology), or think that it relates to their own experience of computers, which typically involves activities such as gaming, web-browsing, and word-processing. However, the focus of computer science is more on understanding the properties of the programs used to implement software such as games and web-browsers, and using that understanding to create new programs or improve existing ones.[3]

The early foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity. Wilhelm Schickard designed the first mechanical calculator in 1623, but did not complete its construction.[4] Blaise Pascal designed and constructed the first working mechanical calculator, the Pascaline, in 1642. Charles Babbage designed a difference engine and then a general-purpose Analytical Engine in Victorian times,[5] for which Ada Lovelace wrote a manual. Because of this work she is regarded today as the world's first programmer.[6] Around 1900, punch-card machines[7] were introduced. However, these machines were constrained to perform a single task, or at best some subset of all possible tasks.
During the 1940s, as newer and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.[8] As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s.[9][10] The first computer science degree program in the United States was formed at Purdue University in 1962.[11] Since practical computers became available, many applications of computing have become distinct areas of study in their own right.

Although many initially believed it was impossible that computers themselves could actually be a scientific field of study, in the late fifties it gradually became accepted among the greater academic population.[12] It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM (short for International Business Machines) released the IBM 704 and later the IBM 709 computers, which were widely used during the exploration period of such devices. "Still, working with the IBM [computer] was frustrating...if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again".[12] During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace.
Time has seen significant improvements in the usability and effectiveness of computer science technology. Modern society has seen a significant shift from computers being used solely by experts or professionals to a more widespread user base. Initially, computers were quite costly, and for their most-effective use, some degree of human aid was needed, in part by professional computer operators. However, as computers became widespread and far more affordable, less human assistance was needed, although residues of the original assistance still remained.

Applied computer science

Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. Also, in the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM – turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist.[23] Three months later in the same journal, comptologist was suggested, followed next year by hypologist.[24] The term computics has also been suggested.[25] In continental Europe, names such as informatique (French), Informatik (German), or informatica (Dutch), derived from information and possibly mathematics or automatic, are more common than names derived from computer/computation.
The renowned computer scientist Edsger Dijkstra stated, "Computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research has also often crossed into other disciplines, such as philosophy, cognitive science, linguistics, mathematics, physics, statistics, and economics.
Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science.[9] Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel and Alan Turing, and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.

The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.[26]

The academic, political, and funding aspects of computer science tend to depend on whether a department was formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally, if not across all research.