DNA (Deoxyribonucleic acid) is a nucleic acid that contains
the genetic instructions used in the development and functioning of all known
living organisms (with the exception of RNA viruses). The DNA segments that
carry this genetic information are called genes, but other DNA sequences have
structural purposes, or are involved in regulating the use of this genetic
information. Along with RNA and proteins, DNA is one of the three major macromolecules that are essential for all known forms of life.
DNA consists of two long polymers of simple units called nucleotides, with backbones made of
sugars and phosphate groups joined by ester bonds. These two strands run in
opposite directions to each other and are therefore anti-parallel. Attached to
each sugar is one of four types of molecules called nucleobases (informally,
bases). It is the sequence of these four nucleobases along the backbone that
encodes information. This information is read using the genetic code, which
specifies the sequence of the amino acids within proteins. The code is read by
copying stretches of DNA into the related nucleic acid RNA in a process called
transcription.
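As a minimal illustration of the copying step described above, the following Python sketch transcribes a DNA coding-strand fragment into RNA by substituting uracil for thymine; the sequence shown is an arbitrary example, not taken from any real gene.

```python
# Minimal illustration (not a bioinformatics library): transcription copies a
# stretch of DNA into RNA. Given the coding (sense) strand, the resulting mRNA
# has the same base sequence with thymine (T) replaced by uracil (U).

def transcribe(coding_strand: str) -> str:
    """Return the mRNA sequence produced from a DNA coding strand."""
    return coding_strand.upper().replace("T", "U")

if __name__ == "__main__":
    dna = "ATGGCGTTTAAC"       # hypothetical coding-strand fragment
    print(transcribe(dna))     # AUGGCGUUUAAC
```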
Within cells DNA is organized into long structures called
chromosomes. During cell division these chromosomes are duplicated in the
process of DNA replication, providing each cell its own complete set of
chromosomes. Eukaryotic organisms (animals, plants, fungi, and protists) store
most of their DNA inside the cell nucleus and some of their DNA in organelles,
such as mitochondria or chloroplasts. In contrast, prokaryotes (bacteria and
archaea) store their DNA only in the cytoplasm. Within the chromosomes,
chromatin proteins such as histones compact and organize DNA. These compact
structures guide the interactions between DNA and other proteins, helping
control which parts of the DNA are transcribed.
Architecture
DNA is a long polymer made from repeating units called
nucleotides. As first discovered by James D. Watson and Francis Crick, the
structure of DNA of all species comprises two helical chains each coiled round
the same axis, and each with a pitch of 34 Ångströms (3.4 nanometres) and a
radius of 10 Ångströms (1.0 nanometres). According to another study, when
measured in a particular solution, the DNA chain measured 22 to 26 Ångströms
wide (2.2 to 2.6 nanometres), and one nucleotide unit measured 3.3 Å (0.33 nm)
long. Although each individual repeating unit is very small, DNA polymers can
be very large molecules containing millions of nucleotides. For instance, the
largest human chromosome, chromosome number 1, is approximately 220 million
base pairs long.
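A back-of-the-envelope Python sketch using the figures quoted above (about 0.33 nm per nucleotide unit and roughly 220 million base pairs for chromosome 1) shows how long such a molecule would be if stretched out; the calculation is illustrative only.

```python
# Back-of-the-envelope check using the figures quoted above: one nucleotide
# unit is ~0.33 nm long and human chromosome 1 is ~220 million base pairs.

RISE_PER_BP_NM = 0.33          # helix rise per base pair, from the text
CHROMOSOME_1_BP = 220_000_000  # approximate length of chromosome 1 in base pairs

length_nm = CHROMOSOME_1_BP * RISE_PER_BP_NM
length_cm = length_nm * 1e-7   # 1 cm = 1e7 nm

print(f"Stretched-out length: {length_cm:.1f} cm")  # roughly 7 cm
```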
In living organisms DNA does not usually exist as a single
molecule, but instead as a pair of molecules that are held tightly together.
These two long strands entwine like vines, in the shape of a double helix. The
nucleotide repeats contain both the segment of the backbone of the molecule,
which holds the chain together, and a nucleobase, which interacts with the
other DNA strand in the helix. A nucleobase linked to a sugar is called a
nucleoside and a base linked to a sugar and one or more phosphate groups is
called a nucleotide. Polymers comprising multiple linked nucleotides (as in DNA) are called polynucleotides.
The backbone of the DNA strand is made from alternating phosphate and sugar residues.[10] The sugar in DNA is
2-deoxyribose, which is a pentose (five-carbon) sugar. The sugars are joined
together by phosphate groups that form phosphodiester bonds between the third
and fifth carbon atoms of adjacent sugar rings. These asymmetric bonds mean a
strand of DNA has a direction. In a double helix the direction of the
nucleotides in one strand is opposite to their direction in the other strand:
the strands are antiparallel. The asymmetric ends of DNA strands are called the
5′ (five prime) and 3′ (three prime) ends, with the 5′ end having a terminal
phosphate group and the 3′ end a terminal hydroxyl group. One major difference
between DNA and RNA is the sugar, with the 2-deoxyribose in DNA being replaced
by the alternative pentose sugar ribose in RNA.
The DNA double helix is stabilized primarily by two forces:
hydrogen bonds between nucleotides and base-stacking interactions among the
aromatic nucleobases. In the aqueous environment of the cell, the conjugated π
bonds of nucleotide bases align perpendicular to the axis of the DNA molecule,
minimizing their interaction with the solvation shell and therefore, the Gibbs
free energy. The four bases found in DNA are adenine (abbreviated A), cytosine
(C), guanine (G) and thymine (T). These four bases are attached to the
sugar/phosphate to form the complete nucleotide, as shown for adenosine monophosphate.
The nucleobases are classified into two types: the purines, A and
G, being fused five- and six-membered heterocyclic compounds, and the
pyrimidines, the six-membered rings C and T. A fifth pyrimidine nucleobase, uracil
(U), usually takes the place of thymine in RNA and differs from thymine by
lacking a methyl group on its ring. Uracil is not usually found in DNA,
occurring only as a breakdown product of cytosine. In addition to RNA and DNA a
large number of artificial nucleic acid analogues have also been created to
study the properties of nucleic acids, or for use in biotechnology.
Base pairing
In a DNA double helix, each type of nucleobase on one strand
normally interacts with just one type of nucleobase on the other strand. This
is called complementary base pairing. Here, purines form hydrogen bonds to
pyrimidines, with A bonding only to T, and C bonding only to G. This
arrangement of two nucleotides binding together across the double helix is
called a base pair. As hydrogen bonds are not covalent, they can be broken and
rejoined relatively easily. The two strands of DNA in a double helix can
therefore be pulled apart like a zipper, either by a mechanical force or high
temperature. As a result of this complementarity, all the information in the
double-stranded sequence of a DNA helix is duplicated on each strand, which is
vital in DNA replication. Indeed, this reversible and specific interaction
between complementary base pairs is critical for all the functions of DNA in
living organisms.
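A minimal Python sketch of this complementary pairing rule: because A pairs with T and C pairs with G, and the two strands are antiparallel, the sequence of one strand determines the other. The example sequence is arbitrary.

```python
# Minimal sketch of complementary base pairing: A pairs with T and C pairs
# with G, and the two strands are antiparallel, so the partner strand is the
# complement read in the reverse direction.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the antiparallel partner strand of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(strand.upper()))

if __name__ == "__main__":
    print(reverse_complement("ATGCGT"))  # ACGCAT
```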
The two types of base pairs form different numbers of
hydrogen bonds, AT forming two hydrogen bonds, and GC forming three hydrogen
bonds (see figures, left). DNA with high GC-content is more stable than DNA
with low GC-content. Although it is often stated that this is due to the added
stability of an additional hydrogen bond, this is incorrect. [Citation needed]
DNA with high GC-content is more stable due to intra-strand base stacking
interactions.
As noted above, most DNA molecules are actually two polymer
strands, bound together in a helical fashion by noncovalent bonds; this double
stranded structure (dsDNA) is maintained largely by the intrastrand base
stacking interactions, which are strongest for G,C stacks. The two strands can
come apart – a process known as melting – to form two ssDNA molecules. Melting
occurs when conditions favor ssDNA; such conditions are high temperature, low salt
and high pH (low pH also melts DNA, but since DNA is unstable due to acid
depurination, low pH is rarely used). The stability of the dsDNA form depends
not only on the GC-content (% G,C basepairs) but also on sequence (since
stacking is sequence specific) and also length (longer molecules are more
stable). The stability can be measured in various ways; a common way is the
"melting temperature", which is the temperature at which 50% of the
ds molecules are converted to ss molecules; melting temperature is dependent on
ionic strength and the concentration of DNA. As a result, it is both the
percentage of GC base pairs and the overall length of a DNA double helix that
determine the strength of the association between the two strands of DNA. Long
DNA helices with a high GC-content have stronger-interacting strands, while
short helices with high AT content have weaker-interacting strands. In biology,
parts of the DNA double helix that need to separate easily, such as the TATAAT
Pribnow box in some promoters, tend to have a high AT content, making the strands
easier to pull apart.
In the laboratory, the strength of this interaction can be
measured by finding the temperature required to break the hydrogen bonds, their
melting temperature (also called Tm value). When all the base pairs in a DNA
double helix melt, the strands separate and exist in solution as two entirely
independent molecules. These single-stranded DNA molecules (ssDNA) have no
single common shape, but some conformations are more stable than others.
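As a rough, hedged illustration of how GC-content relates to duplex stability, the following Python sketch computes GC-content and an approximate melting temperature using the simple Wallace rule (Tm ≈ 2·(A+T) + 4·(G+C) °C), which applies only to short oligonucleotides under typical salt conditions; as noted above, real melting temperatures also depend on sequence, length, ionic strength, and DNA concentration.

```python
# Rough sketch only: the Wallace rule (Tm ~ 2*(A+T) + 4*(G+C) in degrees C) is
# a crude approximation for short oligonucleotides under typical salt
# conditions. As the text notes, real melting temperatures also depend on
# sequence (stacking), length, ionic strength, and DNA concentration.

def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> float:
    """Approximate melting temperature of a short duplex, in degrees C."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

oligo = "TATAATGCGC"  # hypothetical oligo; TATAAT-like stretches melt easily
print(f"GC content: {gc_content(oligo):.0%}, Wallace Tm: {wallace_tm(oligo)} C")
```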
Nanotechnology has emerged as a growing and rapidly changing field. New generations of nanomaterials will evolve, and with them new and possibly unforeseen environmental issues. It will be crucial that the Agency’s approaches to leveraging the benefits and assessing the impacts of nanomaterials continue to evolve in parallel with the expansion of and advances in these new technologies.
Nanotechnology presents potential opportunities to create better materials and products. Already, nanomaterial-containing products are available in U.S. markets, including coatings, computers, clothing, cosmetics, sports equipment and medical devices. A survey by EmTech Research of companies working in the field of nanotechnology has identified approximately 80 consumer products, and over 600 raw materials, intermediate components and industrial equipment items that are used by manufacturers.
Nanotechnology also has the potential to improve the environment, both through direct applications of nanomaterials to detect, prevent, and remove pollutants, and indirectly by using nanotechnology to design cleaner industrial processes and create environmentally responsible products. However, there are unanswered questions about the impacts of nanomaterials and nanoproducts on human health and the environment, and the U.S. Environmental Protection Agency has the obligation to ensure that potential risks are adequately understood to protect human health and the environment. As products made from nanomaterials become more numerous and therefore more prevalent in the environment, EPA is considering how best to leverage advances in nanotechnology to enhance environmental protection, as well as how the introduction of nanomaterials into the environment will affect the Agency’s environmental programs, policies, research needs, and approaches to decision making.
A nanometer is one billionth of a meter (10^-9 m), about one hundred thousand times smaller than the diameter of a human hair, a thousand times smaller than a red blood cell, or about half the diameter of DNA. Figure 1 illustrates the scale of objects in the nanometer range. For the purpose of this document, nanotechnology is defined as: research and technology development at the atomic, molecular, or macromolecular levels using a length scale of approximately one to one hundred nanometers in any dimension; the creation and use of structures, devices and systems that have novel properties and functions because of their small size; and the ability to control or manipulate matter on an atomic scale. This definition is based in part on the definition of nanotechnology used by the National Nanotechnology Initiative (NNI).
Carbon-based materials- These nanomaterials are composed mostly of carbon, most commonly taking the form of hollow spheres, ellipsoids, or tubes. Spherical and ellipsoidal carbon nanomaterials are referred to as fullerenes, while cylindrical ones are called nanotubes. These particles have many potential applications, including improved films and coatings, stronger and lighter materials, and applications in electronics. Figures 3, 4, and 5 show examples of carbon-based nanomaterials.
Metal-based materials- These nanomaterials include quantum dots, nanogold, nanosilver and metal oxides, such as titanium dioxide. A quantum dot is a closely packed semiconductor crystal comprised of hundreds or thousands of atoms, and whose size is on the order of a few nanometers to a few hundred nanometers. Changing the size of quantum dots changes their optical properties. Figures 6 and 7 show examples of metal-based nanomaterials.
Composites- These combine nanoparticles with other nanoparticles or with larger, bulk-type materials. Nanoparticles, such as nanosized clays, are already being added to products ranging from auto parts to packaging materials to enhance mechanical, thermal, barrier, and flame-retardant properties.
The unique properties of these various types of intentionally produced nanomaterials give them novel electrical, catalytic, magnetic, mechanical, thermal, or imaging features that are highly desirable for applications in the commercial, medical, military, and environmental sectors. These materials may also find their way into more complex nanostructures and systems. As new uses for materials with these special properties are identified, the number of products containing such nanomaterials and their possible applications continues to grow.
A researcher works with a scanning beam interference lithography
(SBIL) machine. This is used to create gratings and grids with structures on
the scale of a few nanometres (billionths of a metre). The gratings created on
this scale are used in astronomical telescopes such as the orbiting Chandra
X-ray telescope and the Solar and Heliospheric Observatory (SOHO) satellite.
SBIL uses a laser beam to create the pattern on the target surface. This allows
for very precise control over the pattern. The SBIL could have many uses in the
future as a source of nanotechnological components for computers and machines.
The work of Dr. Carroll’s research group ranges from fundamental
investigations of transport phenomena in nano-scale objects (tests of quantum
mechanics in exotic topologies) to applications of nano-composite materials in
organic devices. The group has active programs in the growth of novel
nanostructures, manipulation and characterization of ordered assemblies of
nanostructures, and the integration of nanomaterials into both standard device
designs and novel quantum effect devices.
The creation of novel nanomaterials is an essential part of the nano-sciences. These materials can
have exotic properties not normally found in nature. In fact, properties such
as super strength, ultra-high thermal conductivity, and superconductivity have been observed in nano-systems even when they are absent in the macro-scale counterparts of the same element. In the group's studies, the extraordinary properties of
assemblies of nano-particles are used to test fundamental physical models,
develop new ultra-light, ultra-strong materials systems, and create technology
at the smallest length scales.
As an example, the carbon nanotube represents an interesting
and complicated topology for the confinement of charge carriers with a diameter
of only 1.4 nm and a length of microns. The molecular helicity, or chirality,
of the nanotube breaks a fundamental symmetry of the nanotube’s point group.
Their studies are examining the relationship of such symmetry breaking and the
accumulation of geometrical phase factors (Berry’s phase) in such systems. When
defects are added in an ordered fashion, the overall real space topology of the
system can become much more interesting. It is hoped that the studies of these
fundamental symmetries will set the foundations for the creation of quantum
effect computation systems based on macro-molecular objects such as carbon
nanotubes.
Molecular nanotechnology
Molecular nanotechnology, sometimes called molecular
manufacturing, describes engineered nanosystems (nanoscale machines) operating
on the molecular scale. Molecular nanotechnology is especially associated with
the molecular assembler, a machine that can produce a desired structure or
device atom-by-atom using the principles of mechanosynthesis. Manufacturing in
the context of productive nanosystems is not related to, and should be clearly
distinguished from, the conventional technologies used to manufacture
nanomaterials such as carbon nanotubes and nanoparticles.
When the term
"nanotechnology" was independently coined and popularized by Eric
Drexler (who at the time was unaware of an earlier usage by Norio Taniguchi) it
referred to a future manufacturing technology based on molecular machine
systems. The premise was that molecular-scale biological analogies of traditional machine components demonstrated that molecular machines were possible: from the countless examples found in biology, it is known that sophisticated, stochastically optimised biological machines can be produced.
It is hoped that developments in nanotechnology will make
possible their construction by some other means, perhaps using biomimetic
principles. However, Drexler and other researchers have proposed that advanced
nanotechnology, although perhaps initially implemented by biomimetic means, ultimately
could be based on mechanical engineering principles, namely, a manufacturing
technology based on the mechanical functionality of these components (such as
gears, bearings, motors, and structural members) that would enable
programmable, positional assembly to atomic specification. The physics and
engineering performance of exemplar designs were analyzed in Drexler's book
Nanosystems.
In general it is very difficult to assemble devices on the atomic scale, as one has to position atoms on other atoms of comparable size and stickiness. Another view, put forth by Carlo Montemagno, is that future
nanosystems will be hybrids of silicon technology and biological molecular
machines. Yet another view, put forward by the late Richard Smalley, is that
mechanosynthesis is impossible due to the difficulties in mechanically
manipulating individual molecules.
This led to an exchange of letters in the ACS publication
Chemical & Engineering News in 2003.Though biology clearly demonstrates
that molecular machine systems are possible, non-biological molecular machines
are today only in their infancy. Leaders in research on non-biological
molecular machines are Dr. Alex Zettl and his colleagues at Lawrence Berkeley
Laboratories and UC Berkeley. They have constructed at least three distinct
molecular devices whose motion is controlled from the desktop with changing
voltage: a nanotube nanomotor, a molecular actuator, and a
nanoelectromechanical relaxation oscillator.
Tools and techniques
There are several important modern developments. The atomic
force microscope (AFM) and the scanning tunneling microscope (STM) are two
early versions of scanning probes that launched nanotechnology. There are other
types of scanning probe microscopy, all flowing from the ideas of the scanning
confocal microscope developed by Marvin Minsky in 1961 and the scanning
acoustic microscope (SAM) developed by Calvin Quate and coworkers in the 1970s,
that made it possible to see structures at the nanoscale. The tip of a scanning
probe can also be used to manipulate nanostructures (a process called
positional assembly). Feature-oriented scanning-positioning methodology
suggested by Rostislav Lapshin appears to be a promising way to implement these
nanomanipulations in automatic mode. However, this is still a slow process
because of the low scanning velocity of the microscope. Various techniques of nanolithography, such as optical lithography, X-ray lithography, dip pen nanolithography, electron beam lithography and nanoimprint lithography, were also developed. Lithography is a top-down fabrication technique in which a bulk material is reduced in size to a nanoscale pattern.
Another group of nanotechnological techniques include those
used for fabrication of nanotubes and nanowires, those used in semiconductor
fabrication such as deep ultraviolet lithography, electron beam lithography,
focused ion beam machining, nanoimprint lithography, atomic layer deposition,
and molecular vapor deposition, and further including molecular self-assembly
techniques such as those employing di-block copolymers. However, all of these
techniques preceded the nanotech era, and are extensions in the development of
scientific advancements rather than techniques which were devised with the sole
purpose of creating nanotechnology and which were results of nanotechnology
research.
The top-down approach anticipates nanodevices that must be built piece
by piece in stages, much as manufactured items are made. Scanning probe
microscopy is an important technique both for characterization and synthesis of
nanomaterials. Atomic force microscopes and scanning tunneling microscopes can
be used to look at surfaces and to move atoms around. By designing different
tips for these microscopes, they can be used for carving out structures on surfaces
and to help guide self-assembling structures. By using, for example, the feature-oriented scanning-positioning approach, atoms can be moved around on a surface with scanning probe microscopy techniques. At present, this is expensive
and time-consuming for mass production but very suitable for laboratory
experimentation.
In contrast, bottom-up techniques build or grow larger
structures atom by atom or molecule by molecule. These techniques include
chemical synthesis, self-assembly and positional assembly. Dual polarisation
interferometry is one tool suitable for characterisation of self-assembled thin
films. Another variation of the bottom-up approach is molecular beam epitaxy or
MBE. Researchers at Bell Telephone Laboratories, including John R. Arthur, Alfred Y. Cho
and Art C. Gossard developed and implemented MBE as a research tool in the late
1960s and 1970s. Samples made by MBE were key to the discovery of the
fractional quantum Hall effect for which the 1998 Nobel Prize in Physics was
awarded. MBE allows scientists to lay down atomically precise layers of atoms
and, in the process, build up complex structures. Important for research on
semiconductors, MBE is also widely used to make samples and devices for the
newly emerging field of spintronics.
Dragonball Evolution is the American film adaptation of the popular Japanese animated series Dragon Ball. It was released in 2009.
Story
It was directed, produced and written by James Wong, and
released by 20th Century Fox. The story centers on the adventures of the lead
character, Goku, around his 18th birthday, as he is asked to gather seven
Dragon Balls to save the world from evil alien forces. On his journey, he meets
several different characters who all join the quest and help him in his task.
The film stars Justin Chatwin as Son Goku, Emmy Rossum as Bulma Briefs, James
Marsters as Lord Piccolo, Jamie Chung as Chi-Chi, Chow Yun-fat as Master Roshi,
Joon Park as Yamcha and Eriko Tamura as Mai. It was released in Japan and
several other Asian nations on March 13, 2009, and in the United States on
April 10, 2009.
Two thousand years ago, a demon named Piccolo (James
Marsters) descended upon Earth wreaking havoc with his minion Ozaru. Seven
mystics created the Mafuba and sealed him away for what they thought was for
good. However, Piccolo breaks free and with his beautiful follower Mai (Eriko
Tamura), proceeds to find the Dragonballs and kill anyone in the way. On his
18th birthday, a young high-school student and martial artist named Goku
(Justin Chatwin) is given the 4-Star Dragonball by his grandfather, Grandpa
Gohan (Randall Duk Kim). After returning home from a party hosted by his crush
Chi-Chi (Jamie Chung), Goku finds his home obliterated and his grandfather near
death in the aftermath of Piccolo's failed attempt to acquire the Dragonball.
Before he dies, Gohan tells Goku to seek out the martial arts master, Muten
Roshi (Chow Yun-fat), who holds another one of the Dragonballs. Along the way,
Goku meets Bulma Briefs (Emmy Rossum) of the Capsule Corporation, who was
studying the 5-Star Dragonball until it was stolen by Mai. Goku offers Bulma
his protection in exchange for her help in finding Roshi and they ultimately
find him in Paozu City. Under Roshi's wing, Goku begins training to harness his
Ki, now knowing that they must acquire all the Dragonballs before the upcoming
solar eclipse, when Ōzaru will return and join with Piccolo. In the midst of
the group's search for the 6-star Dragonball, they fall into a trap set by the
desert bandit Yamcha (Joon Park) but Roshi convinces Yamcha to join them.
Together, the group fights their way through an ambush by Mai and successfully
obtain the next Dragonball. As the group continues their quest, they travel to
a temple where Roshi consults his former teacher Sifu Norris (Ernie Hudson) and
begins preparing the Mafuba enchantment so he can reseal Piccolo, while Goku
must learn the most powerful of the ki-bending techniques: the Kamehameha.
During
the night, Mai - disguised as Chi-Chi - steals the three Dragonballs that Goku
and the others have acquired, adding them to the other four that Piccolo has
gathered. With the Dragonballs successfully united, Piccolo begins to summon
Shen Long, but is stopped by the timely arrival of Goku's team. During the
battle that ensues, consisting of a Ki blast battle and some punches, Piccolo
reveals to Goku that he is Ōzaru, having been sent to Earth as an infant to
destroy it when he came of age. As the eclipse begins, Goku transforms into Ozaru
and terrorizes Bulma and Yamcha, while Roshi attempts to use the Mafuba but, not having enough energy left to live, weakens before he can re-seal Piccolo. Roshi's dying words, spoken as Ōzaru chokes him, restore Goku to his senses, and Goku engages Piccolo in a final battle, seemingly defeating
him with the power of the Kamehameha. Goku then uses the Dragonballs to summon
Shen Long, and request that he restore Roshi to life. As they celebrate, they
realize the Dragonballs have now scattered, and Bulma declares that they must
seek the balls again. Before they head out, Goku visits Chi-Chi so they can
truly begin their relationship, but first, they engage in a sparring match to
see which of them is stronger.
Piccolo is revealed to be alive at the end.
Development
In March 2002, 20th Century Fox acquired feature film rights
to the Dragon Ball franchise. In June 2004, Ben Ramsey, who wrote The Big Hit,
was paid $500,000 to adapt Dragonball Z. 20th Century Fox approached Stephen
Chow to direct the film, and although he said he was deeply interested because
he is a fan of Dragon Ball, Chow declined the chance to direct. He, however,
accepted a role as producer via his company Star Overseas. 20th Century Fox
then went on to send the script to writer/director James Wong who accepted. In
2007, James Wong and Stephen Chow were announced as director and producer
respectively, and the project was retitled Dragon ball. Ben Ramsey's first
draft was deemed too expensive to shoot, and in the end he wrote about five
different drafts of the script following notes from the studio. James Wong
wrote the last draft, again according to notes from the studio.[5] Chow was a
Dragon Ball fan, citing its "airy and unstrained story [which] leaves much
room for creation", but explained he would only serve as producer because
he believes that he should only direct stories he had created. Differing costs
to produce the film have been reported. In January 2008, Marsters told TV Guide that he had been informed the film had a budget of approximately $100 million. In
April 2009, the Spanish television station Telecinco reported that the budget
was $50 million. Marsters would later claim that the film in fact was produced
for $30 million.
AVP is a Hollywood science-fiction film franchise. Two movies have been released in the AVP series, the first in 2004 and the second in 2007. Both are science-fiction horror films based on two popular sci-fi characters, the Alien and the Predator.
AVP (2004)
The film was released on August 13, 2004, in North America
and received mostly negative reviews from film critics. Some praised the special
effects and set designs, while others dismissed the film for its "wooden
dialogue" and "cardboard characters". Nevertheless, Alien vs.
Predator was a commercial success, grossing over $172 million against its $60
million production budget.
In 2004, a satellite detects a mysterious heat bloom beneath
Bouvetøya, an island about one thousand miles north of Antarctica. Wealthy
industrialist Charles Bishop Weyland (Lance Henriksen) assembles a team of
scientists to investigate the heat source and claim it for his multinational
communications company, Weyland Industries. The team includes archaeologists,
linguistic experts, drillers, mercenaries, and a guide named Alexa Woods (Sanaa
Lathan).
As a Predator ship reaches Earth's orbit, it blasts a shaft
through the ice towards the source of the heat bloom. When the humans arrive at
the site above the heat source, an abandoned whaling station, they find the
shaft and descend beneath the ice. They discover a mysterious pyramid and begin
to explore it, finding evidence of a civilization predating written history and
what appears to be a sacrificial chamber filled with human skeletons with
ruptured rib cages.
Meanwhile, three Predators land and kill the humans on the
surface, making their way down to the pyramid and arriving just as the team
unwittingly powers up the structure. An Alien queen awakens from cryogenic
stasis and begins to produce eggs, from which facehuggers hatch and attach to
several humans trapped in the sacrificial chamber. Chestbursters emerge from
the humans and quickly grow into adult Aliens. Conflicts erupt between the
Predators, Aliens, and humans, resulting in several deaths. Unbeknownst to the
others, a Predator is implanted with an Alien embryo.
Through translation of the
pyramid's hieroglyphs the explorers learn that the Predators have been visiting
Earth for thousands of years. It was they who taught early human civilizations
how to build pyramids, and were worshipped as gods. Every 100 years they would
visit Earth to take part in a rite of passage in which several humans would
sacrifice themselves as hosts for the Aliens, creating the "ultimate
prey" for the Predators to hunt. If overwhelmed, the Predators would
activate their self-destruct weapons to eliminate the Aliens and themselves.
The explorers deduce that this is why the current Predators are at the pyramid,
and that the heat bloom was to attract humans for the purpose of making new
Aliens to hunt.
The remaining humans decide that the Predators must be
allowed to succeed in their hunt so that the Aliens do not reach the surface.
As the battle continues most of the characters are killed, leaving only Alexa
and a single Predator to fight against the Aliens. The two form an alliance
and use the Predator’s self-destruct device to destroy the pyramid and the
remaining Aliens. Alexa and the Predator reach the surface, where they battle
the escaped Alien queen. They defeat the queen by attaching its chain to a
water tower and pushing it over a cliff into the water, dragging the queen to
the ocean floor. The Predator, however, dies from its wounds.
AVP (2007)
It is the second film in the series and continues the story of the first AVP movie.
Aliens vs. Predator: Requiem was released on December 25,
2007 and received a largely negative response from film critics. The film
grossed $9.5 million on its opening day and took in a worldwide gross of $128.9
million in theaters. According to Home Media Magazine, the film debuted at #1
in sales and rentals on Blu-ray and on DVD when it was released on home video
on April 15, 2008. Since then, the film has gained $28,550,434 in home video
sales, bringing its total film gross to $157,461,400.
The story begins with a Predator spaceship which is leaving
Earth carrying dead Aliens, living face-huggers, and the body of the Predator
that defeated the Alien queen. A chestburster erupts from the dead Predator's
body; it is a new creature that is a hybrid of Alien and Predator
characteristics. It quickly matures into an adult and begins killing Predators
throughout the ship. A Predator's weapons fire punctures the hull and the ship
crashes in the forest outside of Gunnison, Colorado.
With the Predators dead,
the hybrid and several face-huggers escape, implanting embryos into a nearby
father and son and into several homeless people living in the sewers. A
distress signal from the wrecked ship reaches the Predator home world and a
lone Predator responds, traveling to Earth and using its advanced technology to
observe the cause of the crash and to track the face-huggers. It begins to
erase the evidence of the Aliens' presence by destroying the crashed ship and
using a blue liquid to dissolve the bodies of the face-huggers and their
victims.
Meanwhile, ex-convict Dallas Howard (Steven Pasquale) has
just returned to Gunnison after serving time in prison. He is greeted by
Sheriff Eddie Morales (John Ortiz) and reunites with his younger brother Ricky
(Johnny Lewis). Ricky has a romantic interest in his more affluent classmate
Jesse (Kristen Hager) and is being harassed by her boyfriend Dale (David
Paetkau) and two of his friends. Kelly O'Brien (Reiko Aylesworth) has also just
returned to Gunnison after service in the military, and reunites with her
husband Tim (Sam Trammell) and daughter Molly (Ariel Gade).
The Predator fights
a number of Aliens in the sewers and as the battle reaches the surface several
of them disperse into the town. The Predator pursues some to the power plant,
where collateral damage from its weaponry causes a city-wide power outage.
Ricky and Jesse meet at the high school swimming pool but are interrupted by
Dale and his cohorts just as the power fails and an Alien enters the building,
killing Dale's friends. Another Alien invades the O'Brien home, killing Tim
while Kelly escapes with Molly.
Kelly, Molly, Ricky, Jesse, Dale, Dallas, and Sheriff
Morales meet at a sporting goods store to gather weapons. Troops from the
Colorado Army National Guard arrive but are quickly killed by the Aliens. When
the battle between the Predator and the Aliens enters the store, Dale is killed
and the Predator's shoulder cannons are damaged; it is able to modify one into
a hand-held blaster. As the survivors attempt to escape Gunnison they make
radio contact with Colonel Stevens (Robert Joy), who indicates that an air
evacuation is being staged at the center of town. Kelly is suspicious of the
military's intentions, convincing a small group to go to the hospital where
they hope to escape by helicopter, while Sheriff Morales heads to the
evacuation area with the rest of the surviving citizens. The hospital, however,
has been invaded by Aliens and the hybrid creature. The Predator soon arrives
and in the ensuing battle Jesse is killed, Ricky is injured, and Dallas takes
possession of the Predator's blaster cannon.
As the battle reaches the rooftop, Dallas, Ricky, Kelly, and
Molly escape in the helicopter while the Predator battles the hybrid
hand-to-hand. The two creatures mortally wound each other just as a military
jet arrives; rather than a rescue airlift it is a bomber, executing a tactical
nuclear strike that destroys the entire city and kills all of the
extraterrestrials along with the remaining citizens. The shock wave causes the
fleeing helicopter to crash in a clearing, where the survivors are rescued by
the military. The Predator's blaster cannon is confiscated by Colonel Stevens
and presented to a Ms. Yutani.
AVP 3
This movie is still only a rumor; no sequel has been officially announced. For now, the story continues only in fan series.
Fan story - From the
beginning, fans have wanted an Alien vs Predator film to take place in space
far in the future. The Strauses had a few of their own ideas for a potential
storyline. One of the ideas took place after Aliens and another one involved
the space jockey creature seen in the original Alien. Of course, a movie set in
space is costly and it’s obvious the studio doesn’t want to take the risk. If
AvP3 does ever get the greenlight we could end up with another poor Earth-based
setting. Shane Salerno also pitched a story to Fox where the Predator ship
crashed in Afghanistan and the movie revolved around a Special Forces team.
In astronomy and cosmology, dark matter is matter that
neither emits nor scatters light or other electromagnetic radiation, and so
cannot be directly detected via optical or radio astronomy. Its existence is
inferred from gravitational effects on visible matter and gravitational lensing
of background radiation, and was originally hypothesized to account for
discrepancies between calculations of the mass of galaxies, clusters of
galaxies and the entire universe made through dynamical and general relativistic
means, and calculations based on the mass of the visible "luminous"
matter these objects contain: stars and the gas and dust of the interstellar
and intergalactic medium. Many experiments to detect dark matter through
non-gravitational means are underway.
According to observations of structures larger than solar
systems, as well as Big Bang cosmology interpreted under the Friedmann
equations and the FLRW metric, dark matter accounts for 23% of the mass-energy
density of the observable universe. In comparison, ordinary matter accounts for
only 4.6% of the mass-energy density of the observable universe, with the
remainder being attributable to dark energy. From these figures, dark matter constitutes 83% (23/(23+4.6)) of the matter in the universe, whereas ordinary matter makes up only 17%.
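The 83%/17% split quoted above follows directly from the 23% and 4.6% mass-energy fractions, as this short Python check shows:

```python
# The matter split quoted above follows from the mass-energy fractions:
# dark matter 23% and ordinary matter 4.6% of the observable universe.

dark = 23.0      # percent of total mass-energy density
ordinary = 4.6   # percent of total mass-energy density

dark_fraction_of_matter = dark / (dark + ordinary)
print(f"Dark matter:     {dark_fraction_of_matter:.0%} of all matter")      # ~83%
print(f"Ordinary matter: {1 - dark_fraction_of_matter:.0%} of all matter")  # ~17%
```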
Dark matter was postulated by Fritz Zwicky in 1934 to
account for evidence of "missing mass" in the orbital velocities of
galaxies in clusters. Subsequently, other observations have indicated the
presence of dark matter in the universe; these observations include the
rotational speeds of galaxies, gravitational lensing of background objects by
galaxy clusters such as the Bullet Cluster, and the temperature distribution of
hot gas in galaxies and clusters of galaxies.
Dark matter plays a central role in state-of-the-art
modeling of structure formation and galaxy evolution, and has measurable
effects on the anisotropies observed in the cosmic microwave background. All
these lines of evidence suggest that galaxies, clusters of galaxies, and the
universe as a whole contain far more matter than that which interacts with
electromagnetic radiation. The largest part of dark matter, which does not
interact with electromagnetic radiation, is not only "dark" but also,
by definition, utterly transparent.
As important as dark matter is believed to be in the cosmos,
direct evidence of its existence and a concrete understanding of its nature
have remained elusive. Though the theory of dark matter remains the most widely
accepted theory to explain the anomalies in observed galactic rotation, some
alternative theoretical approaches have been developed which broadly fall into
the categories of modified gravitational laws, and quantum gravitational laws.
Another
explanation for how space acquires energy comes from the quantum theory of
matter. In this theory, "empty space" is actually full of temporary
("virtual") particles that continually form and then disappear. But
when physicists tried to calculate how much energy this would give empty space,
the answer came out wrong - wrong by a lot. The number came out 10^120 times too
big. That's a 1 with 120 zeros after it. It's hard to get an answer that bad.
So the mystery continues.
Another
explanation for dark energy is that it is a new kind of dynamical energy fluid
or field, something that fills all of space but something whose effect on the
expansion of the Universe is the opposite of that of matter and normal energy.
Some theorists have named this "quintessence," after the fifth
element of the Greek philosophers. But, if quintessence is the answer, we still
don't know what it is like, what it interacts with, or why it exists. So the
mystery continues.
A last
possibility is that Einstein's theory of gravity is not correct. That would not
only affect the expansion of the Universe, but it would also affect the way
that normal matter in galaxies and clusters of galaxies behaved. This fact
would provide a way to decide if the solution to the dark energy problem is a
new gravity theory or not: we could observe how galaxies come together in
clusters. But if it does turn out that a new theory of gravity is needed, what
kind of theory would it be? How could it correctly describe the motion of the
bodies in the Solar System, as Einstein's theory is known to do, and still give
us the different prediction for the Universe that we need? There are candidate
theories, but none are compelling. So the mystery continues.
The thing
that is needed to decide between dark energy possibilities - a property of
space, a new dynamic fluid, or a new theory of gravity - is more data, better
data.
So far,
though, scientists don’t know what dark energy is. It could spring from the
vacuum of space itself, becoming a more dominant force as the universe expands
and gets more spacious. Dark energy could be exotic new particles or other
undiscovered physics. Dark energy could mean that our understanding of gravity
needs an overhaul. Or it could be something completely different — perhaps
something that no one has even thought about. It could require scientists to
revise their ideas about the Big Bang, or even develop an entirely new scenario
to explain how the universe was born.
Learning
about dark energy is far more difficult than sticking your toe in the ocean or
toting a bucket of water back to the laboratory, though. Trying to find
something that you didn’t even know existed until a few years ago will require
scientists to devise clever ways of probing the universe and the history of its
birth and evolution, and engineers to design new tools to study them.
In the
coming years and decades, astronomers will study exploding stars, map millions
of galaxies, and plot the gravitational influence of dense galaxy clusters.
Particle physicists will probe conditions near the time of the Big Bang. And
all of them will tweak their models of how the universe began, how it has aged,
and how it will end.
Their work
will help us understand the vast cosmic "ocean" of dark energy — an
ocean that we are just beginning to explore.
Baryonic and nonbaryonic dark matter
A small proportion of dark matter may be baryonic dark
matter: astronomical bodies, such as massive compact halo objects, that are
composed of ordinary matter but which emit little or no electromagnetic
radiation. The vast majority of dark matter in the universe is believed to be
nonbaryonic, and thus not formed out of atoms. It is also believed that it does
not interact with ordinary matter via electromagnetic forces; in particular,
dark matter particles do not carry any electric charge. The nonbaryonic dark
matter includes neutrinos, and possibly hypothetical entities such as axions,
or supersymmetric particles. Unlike baryonic dark matter, nonbaryonic dark
matter does not contribute to the formation of the elements in the early
universe ("Big Bang nucleosynthesis") and so its presence is revealed
only via its gravitational attraction. In addition, if the particles of which
it is composed are supersymmetric, they can undergo annihilation interactions
with themselves resulting in observable by-products such as photons and neutrinos
("indirect detection").
Nonbaryonic dark matter is classified in terms of the mass
of the particle(s) that is assumed to make it up, and/or the typical velocity
dispersion of those particles (since more massive particles move more slowly).
There are three prominent hypotheses on nonbaryonic dark matter, called Hot
Dark Matter (HDM), Warm Dark Matter (WDM), and Cold Dark Matter (CDM); some
combination of these is also possible. The most widely discussed models for
nonbaryonic dark matter are based on the Cold Dark Matter hypothesis, and the
corresponding particle is most commonly assumed to be a neutralino. Hot dark
matter might consist of (massive) neutrinos. Cold dark matter would lead to a
"bottom-up" formation of structure in the universe while hot dark
matter would result in a "top-down" formation scenario.
One possibility is that cold dark matter could consist of
primordial black holes in the range of 10^14 kg to 10^23 kg.[8] Being within the
range of an asteroid's mass, they would be small enough to pass through objects
like stars, with minimal impact on the star itself. These black holes may have
formed shortly after the big bang when the energy density was great enough to
form black holes directly from density variations, instead of from star
collapse. In vast numbers they could account for the missing mass necessary to
explain star motions in galaxies and gravitational lensing effects.
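To see why black holes in this mass range would be physically tiny and so pass through stars with minimal impact, one can evaluate their Schwarzschild radius, r_s = 2GM/c^2 (a standard formula, not taken from the text above); the following Python sketch is purely illustrative.

```python
# Hedged aside (standard physics, not from the text): the Schwarzschild radius
# r_s = 2*G*M/c^2 shows how small black holes in the quoted 10^14-10^23 kg
# range would be, which is why they could pass through a star with minimal impact.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    return 2 * G * mass_kg / C**2

for mass in (1e14, 1e23):
    print(f"M = {mass:.0e} kg -> r_s = {schwarzschild_radius(mass):.1e} m")
# roughly 1e-13 m and 1e-4 m: far smaller than any star
```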
Observational evidence
The first
person to provide evidence and infer the presence of dark matter was Swiss
astrophysicist Fritz Zwicky, of the California Institute of Technology in 1933.
He applied the virial theorem to the Coma cluster of galaxies and obtained
evidence of unseen mass. Zwicky estimated the cluster's total mass based on the
motions of galaxies near its edge and compared that estimate to one based on
the number of galaxies and total brightness of the cluster. He found that there
was about 400 times more estimated mass than was visually observable. The
gravity of the visible galaxies in the cluster would be far too small for such
fast orbits, so something extra was required. This is known as the
"missing mass problem". Based on these conclusions, Zwicky inferred
that there must be some non-visible form of matter which would provide enough
of the mass and gravity to hold the cluster together.
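The following Python sketch gives the flavor of such a virial estimate: the virial theorem relates a cluster's velocity dispersion and size to its gravitational mass, roughly M ~ σ²R/G up to a prefactor of order unity. The velocity dispersion and radius used below are illustrative placeholders, not Zwicky's actual Coma measurements.

```python
# Schematic only: an order-of-magnitude virial estimate of the kind Zwicky
# used. The virial theorem gives M ~ sigma^2 * R / G (prefactors of order
# unity depend on the mass profile). The numbers below are illustrative
# placeholders, not Zwicky's actual Coma cluster data.

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
MPC = 3.086e22       # metres per megaparsec

sigma = 1.0e6        # line-of-sight velocity dispersion, ~1000 km/s (assumed)
radius = 1.0 * MPC   # characteristic cluster radius (assumed)

virial_mass = sigma**2 * radius / G
print(f"Virial mass ~ {virial_mass / M_SUN:.1e} solar masses")
```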
Much of the
evidence for dark matter comes from the study of the motions of galaxies. Many
of these appear to be fairly uniform, so by the virial theorem the total
kinetic energy should be half the total gravitational binding energy of the
galaxies. Experimentally, however, the total kinetic energy is found to be much
greater: in particular, assuming the gravitational mass is due only to the visible matter of the galaxy, stars far from the center of galaxies have much higher
velocities than predicted by the virial theorem. Galactic rotation curves,
which illustrate the velocity of rotation versus the distance from the galactic
center, cannot be explained by only the visible matter. Assuming that the
visible material makes up only a small part of the cluster is the most
straightforward way of accounting for this. Galaxies show signs of being
composed largely of a roughly spherically symmetric, centrally concentrated
halo of dark matter with the visible matter concentrated in a disc at the
center. Low surface brightness dwarf galaxies are important sources of
information for studying dark matter, as they have an uncommonly low ratio of
visible matter to dark matter, and have few bright stars at the center which
would otherwise impair observations of the rotation curve of outlying stars.
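A minimal Python sketch of the rotation-curve argument: if the visible matter accounted for essentially all of the mass, circular speeds beyond the luminous disc would fall off in a Keplerian way, v = sqrt(GM/r), whereas observed curves stay roughly flat. The mass and radii below are illustrative values, not measurements of any particular galaxy.

```python
# Illustrative sketch of why flat rotation curves imply unseen mass. If most
# of the mass M_visible sat inside radius r, circular speeds outside it would
# fall off as v = sqrt(G*M/r) (Keplerian). Observed curves instead stay
# roughly flat, which requires M(r) to keep growing with r. The numbers here
# are placeholders, not measurements of any real galaxy.

import math

G = 6.674e-11
M_SUN = 1.989e30
KPC = 3.086e19              # metres per kiloparsec

M_visible = 1.0e11 * M_SUN  # assumed luminous mass enclosed within ~10 kpc

def keplerian_speed(r_kpc: float) -> float:
    """Circular speed (km/s) if all mass were enclosed within r."""
    r = r_kpc * KPC
    return math.sqrt(G * M_visible / r) / 1e3

for r_kpc in (10, 20, 40):
    print(f"r = {r_kpc:3d} kpc: Keplerian v ~ {keplerian_speed(r_kpc):.0f} km/s "
          "(observed curves stay roughly flat instead)")
```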
Gravitational
lensing observations of galaxy clusters allow direct estimates of the
gravitational mass based on its effect on light from background galaxies, since
large collections of matter (dark or otherwise) will gravitationally deflect
light. In clusters such as Abell 1689, lensing observations confirm the
presence of considerably more mass than is indicated by the clusters' light
alone. In the Bullet Cluster, lensing observations show that much of the
lensing mass is separated from the X-ray-emitting baryonic mass.
Galactic rotation curves
Rotation
curve of a typical spiral galaxy: predicted (A) and observed (B). Dark matter
can explain the velocity curve having a 'flat' appearance out to a large radius
For 40 years
after Zwicky's initial observations, no other corroborating observations
indicated that the mass to light ratio was anything other than unity. Then, in
the late 1960s and early 1970s, Vera Rubin, a young astronomer at the
Department of Terrestrial Magnetism at the Carnegie Institution of Washington
presented findings based on a new sensitive spectrograph that could measure the
velocity curve of edge-on spiral galaxies to a greater degree of accuracy than
had ever before been achieved. Together with fellow staff-member Kent Ford,
Rubin announced at a 1975 meeting of the American Astronomical Society the
discovery that most stars in spiral galaxies orbit at roughly the same speed,
which implied that their mass densities were uniform well beyond the locations
with most of the stars (the galactic bulge). An influential paper presented
these results in 1980. These results suggest that either Newtonian gravity does
not apply universally or that, conservatively, upwards of 50% of the mass of
galaxies was contained in the relatively dark galactic halo. Met with
skepticism, Rubin insisted that the observations were correct. Eventually other
astronomers began to corroborate her work and it soon became well-established
that most galaxies were in fact dominated by "dark matter":
Low Surface Brightness
(LSB) galaxies. LSBs are probably everywhere dark matter-dominated, with the
observed stellar populations making only a small contribution to rotation
curves. Such a property is extremely important because it allows one to avoid
the difficulties associated with the deprojection and disentanglement of the
dark and visible contributions to the rotation curves.[7]
Spiral galaxies. Rotation curves of both low and high surface luminosity galaxies appear to suggest a universal density profile, which can be expressed as the sum of an exponential thin stellar disk and a spherical dark matter halo with a flat core of radius r0 and density ρ0 = 4.5 × 10^−2 (r0/kpc)^−2/3 M⊙ pc^−3, where M⊙ denotes a solar mass (2 × 10^30 kg); a numeric sketch of this profile follows the elliptical-galaxies entry below.
Elliptical
galaxies. Some elliptical galaxies show evidence for dark matter via strong gravitational lensing.[15] X-ray evidence reveals the presence of extended
atmospheres of hot gas that fill the dark haloes of isolated ellipticals and
whose hydrostatic support provides evidence for dark matter. Other ellipticals
have low velocities in their outskirts (tracked for example by planetary
nebulae) and were interpreted as not having dark matter haloes. However
simulations of disk-galaxy mergers indicate that stars were torn by tidal
forces from their original galaxies during the first close passage and put on
outgoing trajectories, explaining the low velocities even with a DM halo.[16]
More research is needed to clarify this situation.
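The universal halo profile quoted for spiral galaxies above can be evaluated numerically; the following Python sketch computes the central density ρ0 for a few illustrative core radii (the radii are arbitrary examples).

```python
# Numeric sketch of the universal-halo central density quoted above:
# rho0 = 4.5e-2 * (r0 / kpc)**(-2/3) solar masses per cubic parsec.
# The core radii chosen below are arbitrary illustrative values.

def halo_central_density(r0_kpc: float) -> float:
    """Central dark-halo density in solar masses per cubic parsec."""
    return 4.5e-2 * r0_kpc ** (-2.0 / 3.0)

for r0 in (1.0, 10.0, 30.0):
    print(f"r0 = {r0:5.1f} kpc -> rho0 ~ {halo_central_density(r0):.3f} M_sun/pc^3")
```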
Note that
simulated DM haloes have significantly steeper density profiles (having central
cusps) than are inferred from observations, which is a problem for cosmological
models with dark matter at the smallest scale of galaxies as of 2008. This may
only be a problem of resolution: star-forming regions which might alter the
dark matter distribution via outflows of gas have been too small to resolve and
model simultaneously with larger dark matter clumps. A recent simulation of a
dwarf galaxy resolving these star-forming regions reported that strong outflows
from supernovae remove low-angular-momentum gas, which inhibits the formation
of a galactic bulge and decreases the dark matter density to less than half of
what it would have been in the central kiloparsec. These simulation
predictions—bulgeless and with shallow central dark matter profiles—correspond
closely to observations of actual dwarf galaxies. There are no such
discrepancies at the larger scales of clusters of galaxies and above, or in the
outer regions of haloes of galaxies.
Exceptions
to this general picture of DM haloes for galaxies appear to be galaxies with
mass-to-light ratios close to that of stars.[citation needed] Subsequent to
this, numerous observations have been made that do indicate the presence of
dark matter in various parts of the cosmos.[citation needed] Together with
Rubin's findings for spiral galaxies and Zwicky's work on galaxy clusters, the
observational evidence for dark matter has been collecting over the decades to
the point that today most astrophysicists accept its existence. As a unifying
concept, dark matter is one of the dominant features considered in the analysis
of structures on the order of galactic scale and larger.
Semiconductor device fabrication is the process used to create the integrated circuits that are present in everyday electrical and electronic devices. It is a multiple-step sequence of photographic and chemical processing steps during which electronic circuits are gradually created on a wafer made of pure semiconducting material.
Silicon is almost always used, but various compound semiconductors are used for
specialized applications.
The entire manufacturing process, from start to packaged chips ready for
shipment, takes six to eight weeks and is performed in highly specialized
facilities referred to as fabs.
The leading semiconductor manufacturers typically have
facilities all over the world. Intel, the world's largest manufacturer, has
facilities in Europe and Asia as well as the U.S. Other top manufacturers
include Taiwan Semiconductor Manufacturing Company (Taiwan), STMicroelectronics (Europe), Analog Devices (US), Integrated Device Technology (US), Atmel (US/Europe), Freescale Semiconductor (US), Samsung (Korea), Texas Instruments (US), GlobalFoundries (Germany, Singapore, future New York fab in construction), Toshiba (Japan), NEC Electronics (Japan), Infineon (Europe), Renesas (Japan), Fujitsu (Japan/US), NXP Semiconductors (Europe and US), Micron Technology (US), Hynix (Korea) and SMIC (China).
Processing
Deposition-- is any
process that grows, coats, or otherwise transfers a material onto the
wafer. Available technologies consist of physical vapor deposition (PVD), chemical
vapor deposition (CVD), electrochemical deposition (ECD), molecular beam
epitaxy (MBE) and more recently, atomic layer deposition (ALD) among
others.
Removal processes-- are
any that remove material from the wafer either in bulk or selectively and
consist primarily of etch processes, either wet etching or dry etching. Chemical-mechanical
planarization (CMP) is also a removal process used between levels.
Patterning-- covers the
series of processes that shape or alter the existing shape of the
deposited materials and is generally referred to as lithography. For
example, in conventional lithography, the wafer is coated with a chemical
called a photoresist. The photoresist is exposed by a stepper, a machine that
focuses, aligns, and moves the mask, exposing select portions of the wafer
to short wavelength light. The unexposed regions are washed away by a
developer solution. After etching or other processing, the remaining
photoresist is removed by plasma ashing.
Modification of electrical
properties-- has historically consisted of doping transistor sources and
drains originally by diffusion furnaces and later by ion implantation.
These doping processes are followed by furnace anneal or in advanced
devices, by rapid thermal anneal (RTA) which serve to activate the implanted
dopants. Modification of electrical properties now also extends to
reduction of dielectric constant in low-k insulating materials via
exposure to ultraviolet light in UV processing (UVP).
Modern chips have up to eleven metal levels produced in over 300 sequenced
processing steps.
Front-end-of-line (FEOL) processing
FEOL processing refers to the formation of the transistors directly in the silicon.
The raw wafer is engineered by the growth of an ultrapure, virtually
defect-free silicon layer through epitaxy. In the most advanced logic devices, prior
to the silicon epitaxy step, tricks are performed to improve the performance of
the transistors to be built. One method involves introducing a straining step wherein a silicon
variant such as silicon-germanium (SiGe) is deposited. Once the epitaxial
silicon is deposited, the crystal lattice becomes stretched somewhat, resulting
in improved electronic mobility. Another method, called silicon on insulator technology involves the insertion of an
insulating layer between the raw silicon wafer and the thin layer of subsequent
silicon epitaxy. This method results in the creation of transistors with
reduced parasitic effects.
Gate oxide and implants
Front-end surface engineering is followed by: growth of the gate dielectric,
traditionally silicon dioxide (SiO2), patterning of the gate,
patterning of the source and drain regions, and subsequent implantation or
diffusion of dopants to obtain the desired complementary electrical properties.
In dynamic random access memory (DRAM) devices, storage capacitors are also
fabricated at this time, typically stacked above the access transistor
(implementing them as trenches etched deep into the silicon surface was a
technique developed by the now defunct DRAM manufacturer Qimonda).
Back-end-of-line (BEOL) processing
Metal layers
Once the various semiconductor devices have been created, they must be
interconnected to form the desired electrical circuits. This occurs in a series
of wafer processing steps collectively referred to as BEOL (not to be confused
with back end of chip
fabrication which refers to the packaging and testing stages). BEOL processing
involves creating metal interconnecting wires that are isolated by dielectric
layers. The insulating material was traditionally a form of SiO2 or
a silicate glass, but recently new low dielectric constant materials are being
used. These dielectrics presently take the form of SiOC and have dielectric
constants around 2.7 (compared to 3.9 for SiO2), although materials
with constants as low as 2.2 are being offered to chipmakers.
Interconnect
Historically, the metal wires consisted of aluminium. In this approach to
wiring, often called subtractive
aluminium, blanket films of aluminium are deposited first, patterned,
and then etched, leaving isolated wires. Dielectric material is then deposited
over the exposed wires. The various metal layers are interconnected by etching
holes, called vias, in the
insulating material and depositing tungsten in them with a CVD technique. This
approach is still used in the fabrication of many memory chips such as dynamic
random access memory (DRAM) as the number of interconnect levels is small,
currently no more than four.
More recently, as the number of interconnect levels for logic has
substantially increased due to the large number of transistors that are now
interconnected in a modern microprocessor, the timing delay in the wiring has
become significant, prompting a change in wiring material from aluminium to copper
and from silicon dioxide to newer low-k materials. This performance
enhancement also comes at a reduced cost via damascene processing that
eliminates processing steps. As the number of interconnect levels increases,
planarization of the previous layers is required to ensure a flat surface prior
to subsequent lithography. Without it, the levels would become increasingly
crooked and extend outside the depth of focus of available lithography,
interfering with the ability to pattern. CMP (chemical mechanical
planarization) is the primary processing method to achieve such planarization
although dry etch back is still
sometimes employed if the number of interconnect levels is no more than three.
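As a rough illustration of why the materials change matters, the sketch below compares relative RC wiring delay for the two material combinations, using approximate bulk resistivities for aluminium and copper and the dielectric constants quoted above. The geometry-independent ratio is a simplification under stated assumptions, not a process-specific figure.

    # Rough, order-of-magnitude sketch: for a fixed wire geometry, delay scales
    # roughly with R*C, R with the metal's resistivity and C with the dielectric
    # constant k of the surrounding insulator.
    rho_al, rho_cu = 2.65e-8, 1.68e-8   # approximate bulk resistivities, ohm*m
    k_sio2, k_lowk = 3.9, 2.7           # dielectric constants quoted above

    rc_ratio = (rho_cu / rho_al) * (k_lowk / k_sio2)
    print(f"relative RC delay (Cu + low-k vs Al + SiO2): {rc_ratio:.2f}")
    # ~0.44, i.e. very roughly a factor-of-two reduction for the same geometry,
    # ignoring barrier layers, line scaling, and other real-world effects.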
Wafer test
The highly serialized nature of wafer processing has increased the demand
for metrology between the various processing steps. Wafer test metrology
equipment is used to verify that the wafers have not been damaged by previous
processing steps up until testing. If the number of failed dies (the integrated
circuits that will eventually become chips) on a wafer exceeds a failure
threshold, i.e. too many dies on one wafer have failed, the wafer is scrapped
to avoid the cost of further processing.
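A minimal sketch of the scrap decision just described might look like the following; the 30% failure threshold and the pass/fail data are invented for illustration, since actual thresholds are fab- and product-specific.

    # Hypothetical scrap decision: scrap the wafer if too large a fraction of
    # its dies have already failed in-line testing.
    def should_scrap(die_results, max_fail_fraction=0.30):
        """die_results: iterable of booleans, True = die passed wafer test."""
        results = list(die_results)
        failed = results.count(False)
        return failed / len(results) > max_fail_fraction

    wafer = [True] * 180 + [False] * 70   # 70 of 250 dies failed so far
    print(should_scrap(wafer))            # False: 28% failed, below the 30% limit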
Device test
Once the front-end process has been completed, the semiconductor devices are
subjected to a variety of electrical tests to determine if they function
properly. The proportion of devices on the wafer found to perform properly is
referred to as the yield.
The fab tests the chips on the wafer with an electronic tester that presses
tiny probes against the chip. The machine marks each bad chip with a drop of
dye. Currently, electronic dye marking is possible if wafer test data is logged
into a central computer database and chips are "binned" (i.e. sorted
into virtual bins) according to predetermined test limits. The resulting
binning data can be graphed, or logged, on a wafer map to trace manufacturing
defects and mark bad chips. This map can also be used during wafer assembly and
packaging.
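The short sketch below illustrates, under assumed test limits and invented parameter names, how per-die measurements can be binned and how the yield falls out of the same binning data that would back a wafer map.

    # Hedged sketch: the parameters ("leakage_nA", "freq_MHz") and their limits
    # are made up for illustration; real test programs have many more measurements.
    measurements = [
        {"die": (0, 0), "leakage_nA": 12.0, "freq_MHz": 1010},
        {"die": (0, 1), "leakage_nA": 55.0, "freq_MHz": 990},
        {"die": (1, 0), "leakage_nA": 8.5,  "freq_MHz": 870},
    ]

    def bin_die(m):
        if m["leakage_nA"] > 50:
            return "bin3_leaky"
        if m["freq_MHz"] < 900:
            return "bin2_slow"
        return "bin1_good"

    bins = {m["die"]: bin_die(m) for m in measurements}
    yield_fraction = sum(b == "bin1_good" for b in bins.values()) / len(bins)
    print(bins)                      # per-die bins: the data behind a wafer map
    print(f"yield = {yield_fraction:.0%}")   # 33% in this toy example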
Chips are also tested again after packaging, as the bond wires may be missing,
or analog performance may be altered by the package. This is referred to as
"final test".
Usually, the fab charges for test time, with prices on the order of cents
per second. Test times vary from a few milliseconds to a couple of seconds, and
the test software is optimized for reduced test time. Multiple chip
(multi-site) testing is also possible, since many testers have the resources to
perform most or all of the tests in parallel.
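A back-of-the-envelope calculation shows how multi-site testing reduces per-die test cost; the specific numbers below are assumptions consistent with the "cents per second" pricing mentioned above, not quoted figures.

    # Sketch of test economics under assumed numbers: with multi-site testing,
    # the tester time (and therefore cost) is shared across the dies tested at once.
    def test_cost_per_die(test_time_s, cost_per_s, sites_in_parallel=1):
        return test_time_s * cost_per_s / sites_in_parallel

    print(test_cost_per_die(2.0, 0.03))                        # 0.06, i.e. 6 cents single-site
    print(test_cost_per_die(2.0, 0.03, sites_in_parallel=4))   # 0.015, i.e. 1.5 cents with 4 sites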
Chips are often designed with "testability features" such as scan
chains and "built-in self-test" to speed testing, and reduce test
costs. In certain designs that use specialized analog fab processes, wafers are
also laser-trimmed during test to achieve tightly distributed resistance
values as specified by the design.
Good designs try to test and statistically manage corners: extremes of silicon behavior caused by operating
temperature combined with the extremes of fab processing steps. Most designs
cope with more than 64 corners.
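The corner count grows multiplicatively with the number of axes being varied, as the following sketch shows; the particular process, voltage, temperature, and extraction values chosen are illustrative assumptions rather than a standard set.

    # Sketch of how corner counts multiply across independent variation axes.
    from itertools import product

    process     = ["ss", "sf", "fs", "ff", "tt"]   # transistor slow/fast splits
    voltage     = ["vmin", "vnom", "vmax"]
    temperature = ["-40C", "25C", "125C"]
    extraction  = ["cworst", "cbest"]              # interconnect RC extremes

    corners = list(product(process, voltage, temperature, extraction))
    print(len(corners))   # 5 * 3 * 3 * 2 = 90 corners, easily exceeding 64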
Wafer backgrinding
ICs are produced on semiconductor wafers that undergo a multitude of
processing steps. The silicon wafers predominantly used today have
diameters of 200 mm or 300 mm (20 or 30 cm). They are roughly 750 μm thick to ensure a minimum of
mechanical stability and to avoid warping during high-temperature processing
steps.
Smartcards, USB memory sticks, smartphones, handheld music players, and
other ultra compact electronic products would not be feasible in their present
form without minimizing the size of their various components along all
dimensions. The backside of the wafers is thus ground prior to wafer dicing
(where the individual microchips are singulated). Wafers thinned down to
between 50 and 75 μm are common today.
The process is also known as 'backlap' or 'wafer thinning'.
Wafer mounting
Wafer mounting is a step that is performed during the die preparation of a wafer
as part of the process of semiconductor fabrication. During this step, the
wafer is mounted on a plastic tape that is attached to a ring. Wafer mounting
is performed right before the wafer is cut into separate dies. The adhesive
film on which the wafer is mounted ensures that the individual dies remain
firmly in place during 'dicing', as the process of cutting the wafer is called.
(Figure: a 300 mm wafer after mounting and dicing. The blue plastic is the adhesive tape, the wafer is the round disc in the middle, and in this case a large number of dies have already been removed.)
Semiconductor-die cutting
In the manufacturing of micro-electronic devices, die cutting, dicing
or singulation is a process of
reducing a wafer containing multiple identical integrated circuits to
individual dies each containing one of those circuits.
During this process, a wafer with up to thousands of circuits is cut into rectangular
pieces, each called a die. Between those functional parts of the circuits, a
thin non-functional spacing is left where a saw can safely cut the wafer
without damaging the circuits. This spacing is called scribe line or saw
street. The width of the scribe is very small, typically around 100 μm. A
very thin and accurate saw is therefore needed to cut the wafer into pieces.
Usually the dicing is performed with a water-cooled circular saw with
diamond-tipped teeth.
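Given a die size and the roughly 100 μm scribe street mentioned above, a commonly used approximation estimates how many candidate dies fit on a wafer; the die dimensions below are assumptions for illustration, and the formula ignores edge exclusion and test structures.

    # Gross die per wafer estimate: circle area divided by die pitch area, minus a
    # correction term for partial dies lost around the wafer edge.
    import math

    def gross_die_per_wafer(wafer_diam_mm, die_w_mm, die_h_mm, scribe_mm=0.1):
        pitch_area = (die_w_mm + scribe_mm) * (die_h_mm + scribe_mm)
        d = wafer_diam_mm
        return int(math.pi * (d / 2) ** 2 / pitch_area
                   - math.pi * d / math.sqrt(2 * pitch_area))

    print(gross_die_per_wafer(300, 10, 10))   # ~626 candidate 10 mm x 10 mm dies on a 300 mm wafer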
Types of blades
The most common blade construction is either a metal or resin bond
containing abrasive grit of natural or, more commonly, synthetic diamond, or
borazon, in various forms. Alternatively, the bond and grit may be applied as a
coating to a metal former.
Packaging
Plastic or ceramic packaging involves mounting the die, connecting the die
pads to the pins on the package, and sealing the die. Tiny wires are used to
connect pads to the pins. In the past, wires were attached by hand, but now
purpose-built machines perform the task. Traditionally, the wires to the chips
were gold, leading to a "lead frame" (pronounced "leed
frame") of copper that had been plated with solder, a mixture of tin and
lead. Lead is poisonous, so lead-free "lead frames" are now mandated
by RoHS.
Chip-scale package (CSP) is another packaging technology. A plastic dual
in-line package, like most packages, is many times larger than the actual die
hidden inside, whereas CSP chips are nearly the size of the die. CSP can be
constructed for each die before
the wafer is diced.
The packaged chips are retested to ensure that they were not damaged during
packaging and that the die-to-pin interconnect operation was performed
correctly. A laser etches the chip's name and numbers on the package.
The Philadelphia Experiment
The Philadelphia Experiment is one of the best-known stories in conspiracy circles: the name given by conspiracy
theorists to an alleged naval military experiment which was supposedly carried out at the Philadelphia
Naval Shipyard in Philadelphia, Pennsylvania, USA,
sometime around October 28, 1943. It is alleged that the U.S. Navy destroyer
escort USS Eldridge was to be rendered invisible (or "cloaked")
to enemy devices. The experiment is also referred to as Project Rainbow. The
story is widely regarded as a hoax. The U.S. Navy maintains that no such
experiment occurred, and details of the story contradict well-established facts
about the Eldridge, as well as the known laws of physics. The story has
captured the imaginations of people in conspiracy theory circles, who repeat
elements of the Philadelphia Experiment in other government conspiracy theories.
The experiment was allegedly based on an aspect of the unified field theory,
a term coined by Albert Einstein. The Unified Field Theory aims to describe
mathematically and physically the interrelated nature of the forces that
comprise electromagnetic radiation and gravity, although to date, no single
theory has successfully expressed these relationships in viable mathematical or
physical terms. According to the accounts, researchers thought that some version
of this Unified Field Theory would enable the Navy to use large electrical
generators to bend light around an object so that it became completely
invisible. The Navy would have regarded this as being of obvious military
value, and according to the accounts, it sponsored the experiment.
Another version of the story proposes that researchers were preparing
magnetic and gravitational measurements of the seafloor to detect anomalies,
supposedly based on Einstein's attempts to understand gravity. In this version
there were also related secret experiments in Nazi Germany to find antigravity,
allegedly led by SS-Obergruppenführer Hans Kammler. In most accounts of the
experiment, the destroyer escort USS Eldridge was fitted with the required
equipment at the Philadelphia Naval Yard. Testing began in the summer of 1943,
and it was supposedly successful to a limited degree. One test, on July 22,
1943, resulted in the Eldridge being rendered almost completely invisible, with
some witnesses reporting a "greenish fog" appearing in its place.
Crew members supposedly complained of severe nausea afterwards. Also, it is
said that when the ship reappeared, some sailors were embedded in the metal
structures of the ship, including one sailor who ended up on a deck level below
that where he began, and had his hand embedded in the steel hull of the ship.
At that point, it is said that the experiment was altered at the request of the
Navy, with the new objective being solely to render the Eldridge invisible to
radar. None of these allegations have been independently substantiated.
The conjecture then alleges that the equipment was not properly re-calibrated,
but in spite of this, the experiment was repeated on October 28, 1943. This
time, the Eldridge not only became invisible, but she physically vanished from
the area in a flash of blue light and teleported to Norfolk, Virginia,
over 200 miles (320 km) away. It could, according to the story, have reappeared
virtually anywhere, but conveniently it is said to have rematerialized in the water. It is claimed that the Eldridge sat for
some time in full view of men aboard the ship SS Andrew Furuseth,
whereupon the Eldridge vanished from their sight, and then reappeared in Philadelphia at the site
it had originally occupied. It was also said that the warship traveled back in
time for about 10 seconds.
Many versions of the tale include descriptions of serious side effects for
the crew. Some crew members were said to have been physically fused to
bulkheads, while others suffered from mental disorders, and still others
supposedly simply vanished. It is also claimed that the ship's crew may have
been subjected to brainwashing, in order to maintain the secrecy of the
experiment.
The claims of the Philadelphia
experiment contradict the known laws of physics. Magnetic fields cannot bend
light waves according to Maxwell's equations. While Einstein's theory of general
relativity shows that light waves can be bent near the surface of an extremely
massive object, such as the sun or a black hole, current human technology
cannot manipulate the astronomical amounts of matter needed to do this.
No Unified Field Theory exists, although it is a subject of ongoing
research. William Moore claimed in his book on the "Philadelphia
Experiment" that Albert Einstein completed, and destroyed, a theory before
his death. This is not supported by historians and scientists familiar with
Einstein's work. Moore
bases his theory on Carl Allen's letter to Jessup, in which Allen refers to a
conversation between Einstein and Bertrand Russell acknowledging that the
theory had been solved, but that man was not ready for it. Shortly before his
death in 1943, the physicist Nikola Tesla was said to have completed some kind
of a "Unified Field Theory". It was never published.
These claims are completely at odds with modern physics. While it is true
that Einstein attempted to unify gravity with electromagnetism based on classical
physics, his geometric approaches, called classical unified field theories,
ignored the modern developments of Quantum theory and the discovery of the Strong
nuclear force and Weak nuclear force. Most physicists consider his overall
approach to be unsuccessful. More recent attempts to develop a unified
theory focus on the development of a quantum theory that includes gravitation.
If a unified field theory were discovered, it would not offer a practical
engineering method to bend light waves around a large object like a battleship. While
very limited "invisibility cloaks" have recently been developed using metamaterials, these are unrelated to theories linking electromagnetism with
gravity.
Here is a brief history from the official website
*
The story begins in June of 1943, with the Destroyer Escort,
U.S.S. Eldridge, DE-173, being fitted with tons of experimental electronic
equipment. This included, according to
one source, two massive generators of 75 KVA each, mounted where the forward
gun turret would have been, distributing their power through four magnetic
coils mounted on the deck. Three RF
transmitters (2 megawatt CW each, mounted on the deck), three thousand “6L6”
power amplifier tubes (used to drive the field coils of the two generators), special
synchronizing and modulation circuits, and a host of other specialized hardware
were employed to generate massive electromagnetic fields which, when properly
configured, would be able to bend light and radio waves around the ship, thus
making it invisible to enemy observers.
The “experiment,” said to have taken place at the
Philadelphia Navy Yard and also at sea, took place on at least one occasion
while in full view of the Merchant Marine ship S.S. Andrew Furuseth, and other
observation ships. The S.S. Andrew
Furuseth becomes significant because one of its crewmen is the source of most
of the original material making up the PX legend. Carlos Miguele Allende, also
known as Carl Michael Allen, wrote a series of strange letters to one
Dr. Morris K. Jessup in the 1950’s in which he described what he claims to have
witnessed: at least one of the several
phases of the Philadelphia Experiment.
At 0900 hours, on July 22nd, 1943, the power to the
generators was turned on, and the massive electromagnetic fields started to
build up. A greenish fog was seen to
slowly envelop the ship, concealing it from view. Then the fog itself is said to have
disappeared, taking the U.S.S. Eldridge with it, leaving only undisturbed water
where the ship had been anchored only moments before.
The elite officers of the U.S. Navy and scientists involved
gazed in awe at their greatest achievement:
the ship and crew were not only radar invisible but invisible to the eye
as well! Everything worked as planned, and
about fifteen minutes later they ordered the men to shut down the generators. The greenish fog slowly reappeared, and the U.S.S.
Eldridge began to re-materialize as the fog subsided, but it was evident to all
that something had gone wrong.
When boarded by personnel from shore, the crew members above
decks were disoriented and nauseous. The
U.S. Navy removed the crew from that original experiment, and shortly afterward,
obtained another crew for a second experiment.
In the end, the U.S. Navy decided that they only wanted to achieve radar
invisibility, and the equipment was altered.
On the 28th of October in 1943, at 17:15, the final test on
the U.S.S. Eldridge was performed. The
electromagnetic field generators were turned on again, and the U.S.S. Eldridge
became nearly invisible. Only a faint
outline of the hull remained visible in the water. Everything was fine for the first few seconds,
and then, in a blinding blue flash, the ship completely vanished. Within seconds it reappeared hundreds of
miles away, in Norfolk, Virginia, and was seen for several minutes. The U.S.S. Eldridge then disappeared from Norfolk as mysteriously
as it had arrived, and reappeared back in Philadelphia Naval Yard. This time most of the sailors were violently
sick. Some of the crew were simply
“missing” never to return. Some of the
crew went crazy. The strangest result of
all of this experiment was that five men were found fused to the metal within
the ship’s structure.
The men that survived were never the same again. Those that lived were discharged as “mentally
unfit” for duty, regardless of their true condition.
So, what had begun as an experiment in electronic camouflage,
ended up as an accidental teleportation of an entire ship and crew, to a
distant location and back again, all in a matter of minutes!
Although the above may seem fantastic, one must remember, that
in the 1940’s the atomic bomb was also being invented.
*
Carlos Miguele Allende was born on May 31, 1925. On July 14, 1942, Allende joined the Marine
Corps and was discharged on May 21, 1943 (The Philadelphia Experiment, p. 99). He then
joined the Merchant Marine and was assigned to the S.S. Andrew Furuseth. It was aboard this ship that he claimed to see
the U.S.S. Eldridge in action. Allende’s
story was bizarre; he stated that he had witnessed the U.S.S. Eldridge being
transported instantaneously to Norfolk from Philadelphia and back
again in a matter of minutes. Upon
researching the matter further, he learned of extremely odd occurrences
associated with the project and wrote a basic summation of his newly learned
knowledge in a letter to Dr. Morris K. Jessup.
Dr. Jessup was an astronomer and Allende had been in the audience of one
of Dr. Jessup’s lectures. Apparently
having some respect for the man, he decided to entrust Dr. Jessup with his
knowledge. The letters were written oddly, with capitalization, punctuation,
and underlines located in various places.
The letters were also written in several colors. In his letters, Allende revealed horrifying
details of the Philadelphia Experiment to Dr. Jessup. Because Dr. Jessup was something of a
believer in odd phenomena, he did not entirely dismiss the ideas presented to
him. He wrote back to Allende and
requested new information. According to the postal service, the return
address on the letter did not exist, yet Allende
still received Dr. Jessup's reply. Allende
responded with more detailed letters, but the correspondence was eventually
discontinued because Dr. Jessup dismissed it as a hoax. During the time of Dr. Jessup's and Allende's
correspondence, Dr. Jessup had just recently published his book titled “The
Case for UFO's.” After Allende had
written to Dr. Jessup, a copy of this book was sent to the U.S. Navy with hand-written
notes inside. The notes were in
the same writing as in the letters sent to Dr. Jessup and eventually Dr. Jessup
was asked by the U.S. Navy to view the notes.
Dr. Jessup recognized the writing immediately, but he was
somewhat astonished, as he had concluded earlier that it was merely a hoax to
trick him. The notes in the book were
more detailed than in the letters and were highly insightful, so Dr. Jessup
eventually believed them and researched the matter. Unfortunately, Dr. Jessup could not find any
new leads. Only one tantalizing clue had
shown up. Two crewmen had been walking
in a park when a haggard looking man approached them. The man told them a fantastic story about an
experiment done in which most of the crew died or suffered terrible side
effects. He said that the government
then claimed the entire crew was insane so that when they came forward, they
would be dismissed as a group of crazy people who had merely concocted
some fantastic story. After the
conversation, one crew member was convinced while the other was not. Eventually, the member that had been
convinced contacted Dr. Jessup and told him the story. Although this was a substantial lead, Dr. Jessup
was not getting very far and he found that his reputation in the scientific
community was worsening. Faced with
overwhelming odds, Dr. Jessup eventually committed suicide on April 20, 1959, believing
“another existence of universe being better than this miserable world” (The
Philadelphia Experiment, p. 79). Some
believe that his suicide was actually an assassination by government agencies
to keep the experiment quiet. Unfortunately
for Dr. Jessup, a major clue in the puzzle turned up shortly after his death. This clue was a man by the name of Alfred D. Bielek.
Al Bielek’s story is even more bizarre than Allende’s. He claims that he was transported in time to
the future and that here in the future he was brainwashed by the U.S. Navy. This brainwashing led him to believe that his
name was Alfred Bielek, rather than his true name, Edward A. Cameron. Upon discovering his true identity, he
tracked down his brother who had also participated in the experiment. Bielek claims that his brother time traveled
to 1983 and lost his “time-lock.” As a
result, his brother aged one year every hour and eventually died. Bielek then claims that his brother was
reborn. Needless to say, only a small
group of people believe Bielek's story. Nearly
everyone else thinks that his stories are, at best, based on some kernel of truth
that he exaggerates for personal reasons. This
opinion seems reinforced by the fact that Bielek began remembering these things
only after having seen the movie “The Philadelphia Experiment.” Bielek has a Ph.D. in physics, so he does
have some technical experience. He is
also a retired electrical engineer with thirty years of experience. Because of his obvious intelligence and skill,
he cannot be discounted entirely. Bielek
stated that aliens provided the technology used in the Philadelphia Experiment. However, the germanium transistor, which was
what Bielek said had been used, was invented by Thomas Henry Moray.
Bielek also stated that Dr. Albert Einstein, Dr. John Von
Neumann, and Dr. Nikola Tesla were involved in the project. Some controversy has arisen as to the
participation of Tesla because he died in New
York City on January 7, 1943, months before the alleged tests took place.
Einstein, on the other hand, suggested such a project as this to the U.S.
Navy on several occasions. Because of
this, he was probably involved in the project.
As for Von Neumann, there is no evidence to confirm or refute his active
participation in the matter, though it is claimed that he later continued the experiment at a
different time.
The principle that lay behind the Philadelphia Experiment
was the Unified Field Theory. This
theory states that gravity and magnetism are connected, just as mass and energy
are connected through the formula E = mc².
The official record states that Einstein never solved the Unified Field
Theory. However, the very nature of the
Philadelphia Experiment suggests otherwise.
It is suspected that Einstein’s Unified Theory has become a government
secret because it demonstrates that both time travel and interstellar space
travel can be performed by manipulating space-time. Space travel can be accomplished without the
assistance of a rocket engine.
*
One fact, which everyone seemed to agree upon, is that a
field was extended many yards, up to perhaps one hundred, outside of the ship
and into the water (anonymous). Everything
inside of this sphere was vague in form and the only visible shape was the hull
of the U.S.S. Eldridge in the water. This
field seemed to have a greenish color and was misty. Another fact everyone agrees on is that the U.S.S.
Eldridge did not function properly after the experiment and became a source of
trouble. The last item everyone believes
is that terrible side effects were manifested upon the crew members. However, when one delves deeper into that
particular subject, no one agrees on what the specific details are.
Some witnesses, Allende and Bielek in particular, state that
matter itself was changed and that men were able to walk through physical
objects. When the field was shut off, some
crew members were found stuck in bulkheads, others within the ship’s deck. Some were found with the railings of the ship
stuck through their bodies. It was a
horrendous sight. The sailors supposedly
went crazy after this and raided a bar. They
told the bar-maid their story and completely terrified her. According to Allende, a newspaper article was
written about the raid, but no specific date was named, so the article cannot be
found. Most crew members went insane, but
a few retained their sanity, only to be thrust into worse situations. One man sat down to dinner with his wife and
child, but then got up from the table, walked through the wall, and was never
seen again. Two others simply
disappeared into thin air and were also never seen again. Another crew member vanished in the middle of
a fight, much to his opponent’s astonishment.
All three incidents had several witnesses.
The worst side effects of the experiment occurred when men
became “stuck” or “locked into” what seems to be another dimensional space. Getting stuck consisted of becoming invisible
and being unable to move, speak, or interact with other people for a period of
time. Allende told about these events in
his letters to Dr. Jessup. The ships
crew members identified the occurrence of “getting stuck” as “Hell
Incorporated” (The Philadelphia Experiment, p. 42). It was also known as the “freeze.” A common freeze would last minutes to hours
and was damaging psychologically, but did not cause madness. A man would only come out of the “freeze” if
other crew members laid their hands upon him to give him strength. Unfortunately, in one instance of the “laying
of hands,” two men who attempted to lay hands upon the man burst into flames
and burned for 18 days (The Philadelphia Experiment, p. 44). The fires could not be stopped, despite
multiple attempts to quench the flames. Needless
to say, the laying of hands was discontinued from that point on. Then, men started going into the “deep freeze,”
when a man would be frozen for several days to several months. During this time, the man was completely aware
of others and their actions but was unable to communicate with them or interact
with them. Men in the “deep freeze” could
only be seen by other crew members. It
took only two days for a man to go completely crazy in the “deep freeze.” The first “deep freeze” took six months and
five million dollars worth of research and equipment to correct (The
Philadelphia Experiment, p. 43). The man
who was stuck for 6 months went completely insane by the time he got out. Carlos Allende wrote: “Usually a deep freeze man goes mad, stark
raving, gibbering, running mad, if his freeze is far more than a day in our
time” (The Philadelphia Experiment, p. 42).
Rick Anderson uncovered research claiming that this disappearance or
freezing of people is due to the Zeeman Effect.
“Zeemanising - the
Zeeman Effect is defined as spreading out of the spectral lines of atoms under
the influence of a strong magnetic field.”
The few remaining sailors have a high PSI factor which is
intensified by fear or hypnosis. Unfortunately,
they have all been discharged from the U.S. Navy as mentally unfit.