At Last: 3G in India


People in India are looking forward to richer information, faster data access and multimedia services through their mobile phones. 3G technology is here to turn this dream into reality, and it is eagerly awaited by telecom operators and subscribers in India.

How long do we have to wait?

Not very long! India is set to launch 3G mobile telephone services by October 2008, first in four Indian metros.

According to Telecom Regulatory Authority of India (Trai) chairman Nripendra Misra, a total of 32.5 MHz is available for allocation within the next six to nine months.

Trai has also recommended auctioning 200 MHz for broadband wireless access services such as WiMAX (Worldwide Interoperability for Microwave Access) and has proposed a national frequency management board to oversee spectrum availability and its efficient use.

He hopes that the allocated spectrum would be enough for the next two years and said Trai would recommend freeing up more spectrum for those who lose out in this auction.

So what is 3G spectrum all about?

What is spectrum?

Radio frequency (RF) is a frequency or rate of oscillation within the range of about 3 Hz to 300 GHz. This range corresponds to the frequency of alternating-current electrical signals used to produce and detect radio waves. Since most of this range is beyond the vibration rate that most mechanical systems can respond to, RF usually refers to oscillations in electrical circuits or electromagnetic radiation.

How is 3G different from 2G and 4G?

The "G" stands for generation: 1G networks were analog, 2G (second-generation) networks are digital, and 3G (third-generation) technology further enhances mobile phone standards.

3G can simultaneously carry both voice data (a telephone call) and non-voice data (such as downloading information, exchanging e-mail, and instant messaging). The highlight of 3G is video telephony. 4G technology stands to be the future standard of wireless devices.

Currently, the Japanese company NTT DoCoMo and South Korea's Samsung are testing 4G communication.

How will 3G services help you?

3G services will enable video broadcast and data-intensive services such as stock transactions, e-learning and telemedicine through wireless communications.

All telecom operators are waiting to launch 3G in India to cash in on revenues from high-end voice, data and video services. India lags behind many Asian countries in introducing 3G services.

What is Trai’s recommendation on 3G pricing?

The Telecom Regulatory Authority of India has recommended auctioning radio frequencies for 3G telecom services at a reserve price of Rs 1,050 crore (Rs 10.50 billion) to companies seeking to offer nationwide high-speed Internet and streaming video.

The base price for spectrum in cities like Mumbai and Delhi and Category A telecom circles is Rs 120 crore (Rs 1200 million); in cities like Chennai and Kolkata and Category B circles Rs 80 crore (Rs 800 million); and in all other cities Rs 15 crore (Rs 150 million).

What are the frequency bands and quota for CDMA?

Trai has recommended three sets of frequency bands: 450 MHz, 800 MHz and 2.1 GHz. CDMA players like Reliance and Tata Teleservices are offered 1.25 MHz each. CDMA operators are free to bid in both the 2.1 GHz and the 450 MHz bands, but they will be allocated spectrum in only one. The pricing of these two bands is linked to the auction in the 2.1 GHz band.

CDMA operators will pay the same as the second-highest GSM bidder. If there is more than one claimant in the 450 MHz band, the reserve price will be half of that arrived at in the 2.1 GHz band. Another rider: if the highest bid is more than a quarter above the lowest, the lowest bidder has to raise its bid to 75 per cent of the winning bid.

But CDMA operators are likely to face problems. Operating 3G services on 450 MHz is difficult because dual-band phones that work both in 450 MHz and in 800 MHz (the band in which CDMA operates in India) are not available.

What are the issues regarding 3G for providers and users?

3G has been introduced successfully in Europe, but several issues continue to hamper its growth:

• High spectrum licensing fees for 3G services.

• Huge capital required to build infrastructure for 3G services.

• Health concerns about the impact of electromagnetic waves.

• High prices of 3G mobile services.

• Uncertainty over whether 2G users will switch to 3G services.

• Time needed for adoption, as the service is new.

What are the issues regarding 3G pricing?

Pricing has been a cause for concern. Spectrum auctions in Europe ran into billions of euros. There, spectrum licensing fees were collected years before 3G services were developed, and building 3G networks required huge investments, hitting mobile operators' margins.

In Japan and South Korea, however, spectrum licensing fees were not charged, as the focus of these countries was national IT infrastructure development.

Which companies have applied for 3G license?

3G trial spectrum has been provided to GSM players like BSNL, MTNL, Bharti and Vodafone, and some international companies have also shown interest in carrying out an interface check on a non-commercial basis ahead of the start of 3G mobile services.

Trial spectrum has been given for a period of one month, at only 1/1000th of the actual 3G spectrum capability. Apart from the PSU majors, spectrum for carrying out 3G trials has been given to all those who applied under the National Frequency Allocation Plan in the 2.1 GHz band. GSM players operate on 900 MHz and 1,800 MHz, while CDMA players operate on 800 MHz.

What is the pricing issue in India?

While the Tatas have welcomed Trai's Rs 1,400-crore (Rs 14 billion) base price for a nationwide rollout of 3G services, the rest of the players find the price exorbitant.

Bharti Airtel is disappointed with the pricing, having expected it to be Rs 300-400 crore (Rs 3-4 billion). It considers the reserve price a disincentive for telecom companies in India and has appealed for lower prices, especially to aid rural penetration.

The Cellular Operators Association of India and the Association of Unified Service Providers of India are studying Trai's recommendations and have not yet commented.

However, Trai chairman Nripendra Misra has said that there is no reason to worry as players will not bid exorbitantly and derail the auction. Misra said telecom operators had matured from their experiences and global developments, and would bid sincerely.

What about the security in a 3G network?

3G networks offer a greater degree of security than their 2G predecessors. By allowing the UE (user equipment) to authenticate the network it is attaching to, the user can be sure the network is the intended one and not an impersonator. 3G networks use the KASUMI block cipher instead of the older A5/1 stream cipher, although a number of serious weaknesses in KASUMI have been identified.

In addition to the 3G network infrastructure security, end-to-end security is offered when application frameworks such as IMS are accessed, although this is not strictly a 3G property.

Where was 3G spectrum first introduced?

Japan was the first country to introduce 3G on a large commercial scale. In 2005, about 40 per cent of its subscribers used 3G networks only. Subscribers were expected to move from 2G to 3G during 2006 and upgrade to the next, 3.5G, level.

The experience in Japan also showed that video telephony was not the killer application for 3G networks; downloading music proved the biggest draw.

In how many countries does 3G exist?

There are about 60 3G networks across 25 countries. In Asia, Europe and the USA, telecom firms use WCDMA technology, a standard that provides a seamless evolution from today's GSM and has the support of the world's largest mobile operators.

WCDMA is built on open standards, offers wide-ranging mobile multimedia possibilities, and promises vast economies of scale, with around 100 terminal designs available to operate on 3G mobile networks.

3G services were introduced in Europe in 2003.

What speed can we expect?

It is often suggested by industry sources that 3G can be expected to provide 384 kbit/s at or below pedestrian speeds, but only 128 kbit/s in a moving car.
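To put these rates in perspective, a back-of-the-envelope calculation helps. The sketch below uses the 384 kbit/s figure from the text; the 4 MB file size and the 40 kbit/s 2G (GPRS) rate are illustrative assumptions, not figures from the article.

```python
# Rough download-time comparison between a typical 2G rate and 3G.
# Assumed values: 4 MB file, 40 kbit/s for 2G; 384 kbit/s is from the text.

def download_seconds(size_mb: float, rate_kbit_s: float) -> float:
    """Time to download size_mb megabytes at rate_kbit_s kilobits/second."""
    bits = size_mb * 1_000_000 * 8          # MB -> bits (decimal megabytes)
    return bits / (rate_kbit_s * 1_000)     # kbit/s -> bit/s

song_mb = 4.0
t_2g = download_seconds(song_mb, 40)     # assumed GPRS throughput
t_3g = download_seconds(song_mb, 384)    # 3G at pedestrian speeds

print(f"2G: {t_2g:.0f} s, 3G: {t_3g:.0f} s")  # 2G: 800 s, 3G: 83 s
```

A music download that takes over thirteen minutes on 2G completes in under a minute and a half at the quoted 3G rate.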

Photo to 3D: Photogrammetry

1. Photogrammetry
Photogrammetry is the technique of measuring objects (2D or 3D) from photogrammes. We commonly say photographs, but they may also be imagery stored electronically on tape or disk, taken by video or CCD cameras, or by radiation sensors such as scanners.
The results can be:
• coordinates of the required object points,
• topographical and thematic maps, and
• rectified photographs (orthophotos).
Its most important feature is the fact that the objects are measured without being touched. Therefore, the term "remote sensing" is used by some authors instead of "photogrammetry". "Remote sensing" is a rather young term, originally confined to working with aerial photographs and satellite images. Today it also includes photogrammetry, although it is still associated more with image interpretation.

Principally, photogrammetry can be divided:

1. Depending on the lens setting:
• far-range photogrammetry (with the camera distance set to infinity),
• close-range photogrammetry (with the camera distance set to finite values).

2. By another grouping:
• aerial photogrammetry (mostly far-range photogrammetry), and
• terrestrial photogrammetry (mostly close-range photogrammetry).
The applications of photogrammetry are widespread. Principally, it is used for object interpretation (What is it? Type? Quality? Quantity?) and object measurement (Where is it? Form? Size?).
Aerial photogrammetry is mainly used to produce topographical or thematic maps and digital terrain models. Among the users of close-range photogrammetry are architects and civil engineers (to supervise buildings and document their current state, deformations or damage), archaeologists, surgeons (plastic surgery) and police departments (documentation of traffic accidents and crime scenes), to mention just a few.

2. Brief History of Photogrammetry
1851: Only a decade after the invention of the daguerreotype by Daguerre and Niepce, the French officer Aimé Laussedat develops the first photogrammetric devices and methods. He is regarded as the initiator of photogrammetry.

1858: The German architect A. Meydenbauer develops photogrammetric techniques for the documentation of buildings and founds the first photogrammetric institute in 1885 (the Royal Prussian Photogrammetric Institute).

1866: The Viennese physicist Ernst Mach publishes the idea of using the stereoscope to estimate volumetric measures.

1885: The ancient ruins of Persepolis were the first archaeological object recorded photogrammetrically.

1889: The first German manual of photogrammetry was published by C. Koppe.

1896: Eduard Gaston and Daniel Deville present the first stereoscopic instrument for vectorized mapping.

1897/98: Theodor Scheimpflug invents the double projection.

1901: Pulfrich creates the first "Stereokomparator" and revolutionizes mapping from stereopairs.

1903: Theodor Scheimpflug invents the "Perspektograph", an instrument for optical rectification.

1910: The ISP (International Society for Photogrammetry), now ISPRS, was founded by E. Dolezal in Austria.

1911: The Austrian Th. Scheimpflug finds a way to create rectified photographs. He is considered the initiator of aerial photogrammetry, since he was the first to succeed in applying photogrammetric principles to aerial photographs.

1913: The first congress of the ISP was held in Vienna.
Until 1945: Development and improvement of measuring ("metric") cameras and analogue plotters.

1964: First architectural tests with the new stereometric camera system, invented by Carl Zeiss, Oberkochen, and Hans Foramitti, Vienna.

1964: Charte de Venise (the Venice Charter).

1968: The first international symposium on photogrammetric applications to historical monuments is held in Paris – Saint-Mandé.

1970: Constitution of CIPA (Comité International de la Photogrammétrie Architecturale) as one of the international specialized committees of ICOMOS (International Council on Monuments and Sites) in cooperation with ISPRS. The two main activists were Maurice Carbonnell, France, and Hans Foramitti, Austria.

1970s: The analytical plotters, first used by U. Helava in 1957, revolutionize photogrammetry. They allow more complex methods to be applied: aerotriangulation, bundle adjustment, the use of amateur cameras, etc.

1980s: Due to improvements in computer hardware and software, digital photogrammetry gains more and more importance.

1996: 83 years after its first congress, the ISPRS returns to Vienna, the town where it was founded.

Triangulation

Triangulation is the principle used by photogrammetry to produce 3-dimensional point measurements. By mathematically intersecting converging lines in space, the precise location of a point can be determined. Photogrammetry can measure many points at a time, with virtually no limit on the number of simultaneously triangulated points. With theodolites, two angles are measured to generate a line from each theodolite. In photogrammetry, it is the two-dimensional (x, y) location of the target on the image that is measured to produce this line. By taking pictures from at least two different locations and measuring the same target in each picture, a "line of sight" is developed from each camera location to the target. If the camera location and aiming direction are known (we describe how this is done in Resection), the lines can be mathematically intersected to produce the XYZ coordinates of each targeted point.
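The intersection step can be sketched numerically. The code below is a minimal illustration (not the V-STARS implementation): given two camera positions and the direction of each line of sight, it returns the midpoint of the shortest segment between the two rays, which serves as the triangulated XYZ coordinate when the rays do not intersect exactly.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Intersect two lines of sight p(s) = c1 + s*d1 and q(t) = c2 + t*d2.
    Measured rays rarely intersect exactly, so return the midpoint of
    the shortest segment joining them."""
    c1, d1, c2, d2 = map(np.asarray, (c1, d1, c2, d2))
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                 # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# Two cameras at (0,0,0) and (2,0,0), both sighting the target at (1,1,5):
p = triangulate([0, 0, 0], [1, 1, 5], [2, 0, 0], [-1, 1, 5])
print(p)  # -> approximately [1. 1. 5.]
```

With real image measurements the directions d1 and d2 come from the (x, y) target locations on the two photographs together with each camera's orientation.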

However, the accuracy of a photogrammetric measurement can vary significantly, since accuracy depends on several inter-related factors. The most important are:
1. the resolution (and quality) of the camera you are using,
2. the size of the object you're measuring,
3. the number of photographs you're taking, and
4. the geometric layout of the pictures relative to the object and to each other.

Think of these four factors as the base of a pyramid, with high accuracy at the top. To get higher accuracy (a higher pyramid) you need more of each item: higher resolution, smaller size, more photos, and wider, but not too wide, geometry.

Types of Measurements

Photogrammetry is a versatile, powerful, and flexible measuring technology. Measurements have been done on land, at sea (and undersea), in the air, and even in outer space, on objects smaller than a football to larger than a football field. Photogrammetry is widely used in the aerospace, antenna, shipbuilding, construction, and automotive industries for a wide variety of measurement tasks.

Objects to be measured

Although every photogrammetric project is somewhat different, we have separated them into broad categories to help describe general approaches for performing a successful measurement.

Measurements can be classified as initial or repeat, and as completely overlapping or partially overlapping. The two categories are not mutually exclusive; initial measurements can be completely overlapping or partially overlapping, and so can repeat measurements. In general, a completely overlapping, repeat measurement is the easiest type of measurement, while an initial, partially overlapping measurement is the most difficult.

3. Description of photogrammetric techniques

3.1. Photographing Devices
A photographic image is a "central perspective". This implies that every light ray which reached the film surface during exposure passed through the camera lens (mathematically considered a single point, the so-called "perspective center"). In order to take measurements of objects from photographs, this ray bundle must be reconstructed. Therefore, the internal geometry of the camera used (defined by the focal length, the position of the principal point, and the lens distortion) has to be precisely known. The focal length is called the "principal distance": the distance of the projection center from the principal point of the image plane. Depending on the availability of this knowledge, the photogrammetrist divides photographing devices into three categories:

3.1.1. Metric cameras

They have stable and precisely known internal geometries and very low lens distortions, which makes them very expensive devices. The principal distance is constant, meaning that the focus cannot be changed when taking photographs. As a result, metric cameras are only usable within a limited range of distances from the object. The image coordinate system is defined by (mostly) four fiducial marks mounted on the frame of the camera. Terrestrial metric cameras can be combined with tripods and theodolites. Aerial metric cameras are built into aeroplanes, mostly looking straight downwards. Today, all of them have an image format of 23 by 23 centimeters.

3.1.2. Stereometric camera
If an object is photographed from two different positions, the line between the two projection centers is called the "base". If both photographs have viewing directions that are parallel to each other and at a right angle to the base (the so-called "normal case"), then they have properties similar to the two images on our retinas. The overlapping area of these two photographs (called a "stereopair") can therefore be viewed in 3D, simulating human stereoscopic vision.
In practice, a stereopair can be produced with a single camera from two positions or using a stereometric camera.
A stereometric camera consists in principle of two metric cameras mounted at either end of a bar of precisely measured length (mostly 40 or 120 cm), which functions as the base. Both cameras have the same geometric properties and, since they are adjusted to the normal case, stereopairs are created easily.
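In the normal case, depth follows directly from the base, the principal distance, and the disparity (the shift of a point between the two images): Z = f · B / d. The numbers below are illustrative assumptions, not values from the text.

```python
def depth_from_disparity(f_px: float, base_m: float, disparity_px: float) -> float:
    """Normal-case stereo depth.
    f_px          principal distance expressed in pixels
    base_m        distance between the two projection centers (the base)
    disparity_px  shift of the point between the two images, in pixels
    """
    return f_px * base_m / disparity_px

# Assumed example: 1.2 m base, principal distance of 2000 px, 60 px disparity:
z = depth_from_disparity(f_px=2000, base_m=1.2, disparity_px=60)
print(z)  # -> 40.0 (meters)
```

The formula also shows why a longer base or a longer principal distance improves depth precision: both magnify the disparity produced by a given depth difference.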

3.1.3. „Amateur“ cameras
The photogrammetrist speaks of an "amateur camera" when the internal geometry is not stable and is unknown, as is the case with any "normal" commercially available camera, even an expensive, technically advanced professional one. By photographing a test field with many control points at a repeatably fixed distance setting (for example at infinity), a "calibration" of the camera can be calculated. In this case, the four corners of the camera frame function as fiducials. The precision, however, will never reach that of metric cameras, so amateur cameras can only be used for purposes where high accuracy is not demanded. In many practical cases such photography is nevertheless better than nothing, and very useful in emergencies.
3.2. Photogrammetric Techniques
Depending on the available material (metric camera or not, stereopairs, shape of the recorded object, control information, ...) and the required results (2D or 3D, accuracy, ...), different photogrammetric techniques can be applied. Depending on the number of photographs, three main categories can be distinguished.

3.2.1. Mapping from a single photograph

This is only useful for plane (2D) objects. Obliquely photographed plane objects show perspective deformations, which have to be rectified. A broad range of rectification techniques exists, some of them very simple, but there are limitations: to get good results even with the simple techniques, the object should be plane (for example a wall), and since only a single photograph is used, the mapping can only be done in 2D.
Rectification can be neglected only if the object is flat and the picture is taken from a position perpendicular to the object. In this case, the photograph has a single, unique scale factor, which can be determined if the length of at least one distance on the object is known.
Very briefly, we will now describe some common techniques:
• Paper strip method

This is the cheapest method, since only a ruler, a piece of paper with a straight edge, and a pencil are required. It was used during the last century. Four points must be identified in the picture and on a map. From one point, lines are drawn to the others (on the image and the map) and to the required object point (on the image). Then the paper strip is placed on the image and the intersections with the lines are marked. The strip is next placed on the map and adjusted until the marks coincide with the lines again, after which a line can be drawn on the map towards the mark of the required object point. The whole process is repeated from another point, giving the object point on the map as the intersection of the two object lines.
• Optical rectification

This is done using photographic enlargers, which should fulfil the so-called "Scheimpflug condition" and the "vanishing-point condition". Again, at least four control points are required, no three of them on one line. The control points are plotted at a certain scale. The control-point plot is rotated and displaced until two points match the corresponding points of the projected image. After that, the table is tilted by two rotations until the projected negative fits all control points. Then an exposure is made and developed.
• Numerical rectification

Again, the object has to be plane and four control points are required. In numerical rectification, the image coordinates of the desired object points are transformed into the desired (again 2D) coordinate system. The result is the coordinates of the projected points.
• Differential rectification

If the object is uneven, it has to be divided into smaller parts which are plane. Each part can then be rectified with one of the techniques shown above. Of course, even objects may also be rectified piecewise, differentially. A prerequisite for differential rectification is the availability of a digital object model, i.e. a dense raster of points on the object with known distances from a reference plane; in aerial photogrammetry this is called a DTM (Digital Terrain Model).
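The plane-to-plane transformation behind numerical rectification is a projective transform (homography) with eight parameters, which is exactly why four control points (eight coordinates) are required. A minimal sketch with invented coordinates:

```python
import numpy as np

def rectify_transform(image_pts, ground_pts):
    """Solve for the 8 parameters of the plane projective transform
    mapping image coordinates (x, y) to ground coordinates (u, v):
        u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1)
        v = (h3*x + h4*y + h5) / (h6*x + h7*y + 1)
    Four control points give the 8 equations needed."""
    A, rhs = [], []
    for (x, y), (u, v) in zip(image_pts, ground_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); rhs.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); rhs.append(v)
    return np.linalg.solve(np.array(A, float), np.array(rhs, float))

def apply_transform(h, x, y):
    """Rectify one image point into ground coordinates."""
    w = h[6] * x + h[7] * y + 1.0
    return ((h[0] * x + h[1] * y + h[2]) / w,
            (h[3] * x + h[4] * y + h[5]) / w)

# Four control points (image -> ground), then rectify a fifth point:
img = [(0, 0), (1, 0), (0, 1), (1, 1)]
gnd = [(1, 3), (3, 3), (1, 5), (3, 5)]     # here: a pure scale + shift
h = rectify_transform(img, gnd)
print(apply_transform(h, 0.5, 0.5))  # -> approximately (2.0, 4.0)
```

Digital rectification applies the same transform, but to every pixel of the scanned image instead of to individually measured points.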
• Monoplotting

This technique is similar to numerical rectification, except that the coordinates are transformed into a 3D coordinate system. First, the orientation elements, i.e. the coordinates of the projection center and the three angles defining the view of the photograph, are calculated by spatial resection. Then, using the calibration data of the camera, any ray that came from the archaeological feature through the lens onto the photograph can be reconstructed and intersected with the digital terrain model.
• Digital rectification

Digital rectification is a rather new technique, somewhat similar to monoplotting, but here the scanned image is transformed pixel by pixel into the 3D real-world coordinate system. The result is an orthophoto: a rectified photograph with a unique scale.

3.2.2. Stereophotogrammetry

As the term implies, stereopairs are the basic requirement here. These can be produced using stereometric cameras. If only a single camera is available, two photographs can be taken from different positions, trying to match the conditions of the "normal case". Vertical aerial photographs come closest to the "normal case". They are made using special metric cameras built into an aeroplane looking straight downwards. While taking the photographs, the aeroplane flies over the area in a meandering pattern, so that the whole area is covered by overlapping photographs. The overlapping part of each stereopair can be viewed in 3D and consequently mapped in 3D using one of the following techniques:
Analogue

The analogue method was mainly used until the 1970s. Simply explained, it tries to invert the recording procedure. Two projectors with the same geometric properties as the camera used (set during the so-called "inner orientation") project the negatives of the stereopair. Their positions then have to be rotated into exactly the same relationship to each other as at the moment of exposure (the "relative orientation"). After this step, the projected bundles of light rays from the two photographs intersect, forming a three-dimensional optical "model". Finally, the scale of this model has to be related to its true dimensions, and the rotations and shifts relative to the mapping (world) coordinate system determined; for this, at least three control points not on one straight line are required (the "absolute orientation").
The optical model is viewed by means of a stereoscope. The intersections of rays can then be measured point by point using a measuring mark, which consists of two marks, one on each photograph. When viewing the model, the two marks fuse into a single 3D mark, which can be moved and raised until the desired point of the 3D object is met. The movements of the mark are mechanically transmitted to a drawing device, and in that way maps are created.
Analytical

The first analytical plotters were introduced in 1957, and from the 1970s on they became commonly available on the market. The idea is still the same as with analogue instruments, but here a computer manages the relationship between image and real-world coordinates. The restitution of the stereopair is done in three steps: after restoration of the "inner orientation", where the computer may now also correct for the distortion of the film, both pictures are relatively oriented; after this step, the pictures can be viewed in 3D. Then the absolute orientation is performed, where the 3D model is transferred to the real-world coordinate system; for this, at least three control points are required.

After the orientation, any detail can be measured from the stereomodel in 3D. As in the analogue instrument, the model and a corresponding measuring mark are seen in 3D, and the movements of the mark are under the operator's control. The main difference from the former analogue plotting process is that the plotter no longer plots directly onto the map but onto the monitor's screen or into the database of the computer. The analytical plotter uses the computer to calculate the real-world coordinates, which can be stored as an ASCII file or transferred on-line into CAD programs. In that way, 3D drawings are created which can be stored digitally, combined with other data, and plotted later at any scale.
Digital

Digital techniques have become widely available during the last decade. Here, the images are not on film but are stored digitally on tape or disc. Each picture element (pixel) has a known position and a measured intensity value: a single value for black-and-white images, several such values for colour or multispectral images.

3.2.3. Mapping from several photographs

This kind of restitution, which can be done in 3D, has only become possible with analytical and digital photogrammetry. Since the required hardware and software are steadily getting cheaper, its fields of application grow from day to day.
Here, usually more than two photographs are used. 3D objects are photographed from several positions located around the object, such that any object point is visible on at least two, better three, photographs. The photographs can be taken with different cameras (even "amateur" cameras) and at different times (if the object does not move).
Technique

As mentioned above, only analytical or digital techniques can be used. In all methods, a bundle adjustment is calculated first: using control points and triangulation points, the geometry of the whole block of photographs is reconstructed with high precision. Then the image coordinates of any desired object point, measured in at least two photographs, can be intersected. The results are the coordinates of the required points.
In that way, the whole 3D object is digitally reconstructed.

Frequently Asked Questions About Photogrammetry

This section lists frequently asked questions about photogrammetry.
1. How many photographs are needed for a measurement?
2. How many points are needed for a measurement?
3. Do I need scale for the measurement? How do I get it?
4. How do I compensate for scale changes due to temperature?
5. Can the object move while it is being measured?
6. Do I need to use special targets with the system? Can I measure untargeted features?
7. What size should the targets be? Can I use different size targets on the same measurement?
8. How obliquely can I view the targets?
9. Do I need to provide special lighting for the system? Do I have to consider the lighting where the measurement is being taken?
10. Do I need to know where the camera is located when I take a photograph? How steady must the camera be when taking a picture?
11. How far away do I have to get from the object to measure it? Where should I locate the camera to get a good measurement?
12. How can I calibrate the camera and make sure the measurement is accurate?

How many photographs are needed for a measurement?

As V-STARS measures by triangulation, in theory only two photographs are needed for a measurement. However, we recommend you take a minimum of four to six photographs. With four to six photographs you can self-calibrate the camera. Self-calibration is a powerful technique in which the camera is calibrated as a by-product of the measurement. This allows the camera to be calibrated at the time of measurement, under the conditions that exist at the time of the measurement. In order to self-calibrate the camera you must take a minimum of six photographs if the object is essentially flat, and a minimum of four photographs if it isn't. Extra photographs also produce a more accurate and reliable measurement, and typically take little more time to measure, so go ahead and take them.

How many points are needed for a measurement?

To get a good solution, we recommend measuring a minimum of twelve well-distributed points (and preferably fifteen to twenty) in each photograph. Also, the entire measurement should have at least twenty (preferably thirty) well-distributed points. When in doubt, add more points; it's quick and easy to do, so go ahead and do it.
Of course, measuring more points leads to a better solution, but you quickly reach a point of diminishing returns. In most cases, measuring more than forty well-distributed points in each photograph, and more than sixty well-distributed points overall, will not significantly improve the solution. Notice we always qualify the number of points with the term well-distributed. The distribution of the points can often be much more important than their number. It is better, for example, to have twenty points spread out over the entire area being measured than to have fifty clustered in one small area and fifty more clustered in another small area. Points added only to improve the distribution are usually called "fill-in" points.

Do I need scale for the measurement?

Whether you need scale for the measurement depends on the
application, but most applications do need to scale the measurement.
To get scale, you must provide V-STARS with at least one known distance between two measured points. You can specify a virtually unlimited number of scale distances, and we recommend you use at least two scale distances, whenever possible, to provide redundancy. Of course,
the scale points are like any other points; they must be measured and
triangulated. They do not have to be measured in all the photographs to
be triangulated, and they do not have to be seen in the same
photographs. They simply must be seen in at least two of the entire set of
photographs so they can be triangulated. Of course, for best results, you
should try to see them in at least three or more photographs with good
geometry.
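V-STARS applies scale internally, but the underlying idea can be sketched as follows. This is an illustrative example only, not V-STARS code: the function names and the simple averaging of redundant scale distances are assumptions for the sketch.

```python
import math

def distance(p, q):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def scale_measurement(points, scale_pairs):
    """Scale triangulated points using known distances.

    points: dict of label -> (x, y, z) in arbitrary (unscaled) units.
    scale_pairs: list of ((label_a, label_b), known_distance) entries.
    With more than one scale distance, the factors are averaged,
    which is one simple way to use the recommended redundancy.
    """
    factors = []
    for (a, b), known in scale_pairs:
        measured = distance(points[a], points[b])
        factors.append(known / measured)
    scale = sum(factors) / len(factors)
    return {label: tuple(c * scale for c in p)
            for label, p in points.items()}
```

For example, if two scale points were triangulated 2.0 units apart but their calibrated separation is 1.0 m, every coordinate is multiplied by 0.5.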
Often, to get scale for the measurement, bars with targets located on
them at precisely known distances are placed on or around the object.
This is often not a trivial matter. Placing the Scale Bars on or near the
object without obscuring other targets, or being obscured themselves, can
sometimes be difficult. One must also be careful to ensure the scale
targets fit onto the photographs since they often are placed around the
periphery of the object, or extend outside the boundaries of the object
being measured. For the best results, the Scale Bar(s) should be
comparable to the size of the object being measured.
Finally, it is very important to realize the Scale Bar(s) must be rigidly
attached to the object being measured. That is, a Scale Bar CANNOT
move relative to the object being measured while the object is being
measured. If it does move during this time, the scale measurements will
be corrupted, and can’t be used. (If the Scale Bar has moved during the
measurement, the operator will be able to detect the movement when
looking at the measurement results).

How do I compensate for scale changes due
to temperature?

If the Scale Bar is made of the same material as the object being
measured, applying the scale distance(s) should scale the entire object to
the temperature at which the Scale Bar was calibrated. If you want to
scale the measurement to another temperature (for example, the
ambient temperature at the time of measurement), you can apply the
temperature coefficient of the Scale Bar material to the calibrated Scale
Bar distance.
If the Scale Bar is made of a different material than the object being
measured, then you must apply the temperature coefficient of the Scale
Bar material to the calibrated Scale Bar distance to get the true distance
at the ambient temperature. Then, you can scale the measured material
to any temperature by applying the temperature coefficient of the
measured material to the object measurement. However, in both cases,
we have assumed the measured object and the Scale Bar are both at the
same temperature. If the two have significantly different thermal masses,
and the temperature has changed significantly, this assumption will not
hold. Fortunately most measurements are completed so quickly that
there will be very little scale change due to temperature.
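The corrections described above are ordinary linear thermal-expansion arithmetic, and can be sketched as below. The function names are illustrative assumptions; the expansion formula L(T) = L₀ · (1 + α · ΔT) is standard.

```python
def true_scale_length(calibrated_length, alpha_bar, t_ambient, t_cal):
    """Length of the Scale Bar at the ambient temperature.

    calibrated_length: bar length at its calibration temperature t_cal.
    alpha_bar: linear thermal-expansion coefficient of the bar
               material (1/degC), e.g. ~11.7e-6 for steel.
    """
    return calibrated_length * (1.0 + alpha_bar * (t_ambient - t_cal))

def object_length_at(measured_length, alpha_obj, t_target, t_measured):
    """Rescale a length measured on the object to another temperature,
    using the temperature coefficient of the measured material."""
    return measured_length * (1.0 + alpha_obj * (t_target - t_measured))
```

For example, a 1 m steel Scale Bar calibrated at 20 °C and used at 30 °C is actually about 1.000117 m long, and that corrected value is what should be applied as the scale distance.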

Can the object move while it is being
measured?

Yes, under certain circumstances. The object can move during the
measurement as long as it moves as a rigid body. That is, the entire
object cannot undergo any deformation when it is moved. Sometimes,
this feature of V-STARS can be used to simplify a measurement by moving the object relative to the camera, rather than moving the camera around the object. For example, if an object is mounted on a turntable, the
camera can remain stationary and the object can be rotated to several
positions with the turntable. Of course, the object must be rigid enough to
maintain its shape when being rotated.
If the object is moved, it is important that the Scale Bars be mounted so
that they move with the object. If not, the scale measurement is
corrupted, and can’t be used.

Do I need to use special targets with the
system?

The V-STARS system measures special targets made of a thin 0.1 mm (0.004″) thick, flat, grayish-colored retro-reflective material. This material has several advantages over conventional targets (typically a white circle on a black background). The retro-reflective material returns light very efficiently to the light source (the targets are similar in principle and operation to highway reflectors, only much more efficient), and is typically 100 to 1000 times more efficient at returning light than a white target. A relatively low-powered strobe located at the camera lens is used to illuminate the targets, and makes exposure of the targets independent of the ambient light level. This means the object can be photographed in bright light or total darkness, and the target exposure will be the same.
Furthermore, the strobe power is low enough that the strobe does not
normally significantly illuminate the object. Thus, the target and object
exposure are largely independent with target exposure provided by the
strobe, and object exposure provided by the ambient light. By setting the
shutter exposure time appropriately you can expose the object to
whatever level you desire. You can make a normal exposure, but usually
you will want to underexpose the object significantly to make the target
measurement easier and more reliable. Then, you can use the
enhancement features available in V-STARS to enhance the object.

What size should the targets be?
Can I use different size targets on the same
measurement?

The target size depends on the distance from the camera to the object.
A rough rule of thumb is to use a target 2 millimeters (0.08″) in diameter for every meter of object size. For example, you should use a 6 mm diameter target for a 3 meter object. If necessary, you can use smaller target sizes by increasing the strobe power. For best results, we recommend you try to use the same size targets on a measurement whenever possible. However, target sizes that vary by up to 2:1 are usually acceptable.
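The rule of thumb above reduces to a one-line calculation. This is a sketch of that rule only (the function name is an assumption), not a substitute for the guidance in the V-STARS documentation:

```python
def recommended_target_diameter_mm(object_size_m):
    """Rule of thumb from the text: about 2 mm of target diameter
    per meter of object size (e.g. a 6 mm target for a 3 m object)."""
    return 2.0 * object_size_m
```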

How obliquely can I view the targets?
Although retro-reflective targets have several advantages over
conventional targets (see the question above), they tend to lose their
special reflective properties when viewed too obliquely, becoming dim and
unmeasurable. For the best results, the targets shouldn’t be viewed from
more than 60 to 65° off-axis.
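When planning camera stations, the off-axis viewing angle of a target can be checked with basic vector geometry. The sketch below is an illustrative planning aid under assumed inputs (a target position, its surface normal, and a candidate camera position), not part of V-STARS:

```python
import math

def off_axis_angle_deg(target_pos, target_normal, camera_pos):
    """Angle (degrees) between the target's surface normal and the
    direction from the target to the camera."""
    view = [c - t for c, t in zip(camera_pos, target_pos)]
    dot = sum(v * n for v, n in zip(view, target_normal))
    norm = (math.sqrt(sum(v * v for v in view))
            * math.sqrt(sum(n * n for n in target_normal)))
    return math.degrees(math.acos(dot / norm))

def target_measurable(target_pos, target_normal, camera_pos,
                      max_angle=65.0):
    """True if the target is viewed within the ~60-65 degree limit."""
    return off_axis_angle_deg(target_pos, target_normal,
                              camera_pos) <= max_angle
```

A camera directly in front of a target views it at 0° off-axis; a camera nearly in the target's plane views it at close to 90° and the target would be unmeasurable.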

Do I need to provide special lighting for the
system?
Do I have to consider the lighting during the measurement?

The strobe system provided with V-STARS is all that is needed to illuminate
the targets, and the target exposure is independent of the ambient light.
However, you should set the shutter time to underexpose the background.
This makes the targets easier to find and measure.

Do I need to know the camera location when I
take a picture?

How steady must the camera be when taking
a picture?

You don’t have to know where the camera is since V-STARS figures out
where the camera is located automatically using GSI’s AutoStart
procedure. With AutoStart, the operator only has to measure four known
points (which can’t be collinear) on the image and V-STARS will figure out
where the camera is. If you don’t have good coordinates for any points
on the object (a first time measurement, for example) you can use our
AutoBar to get the camera location.
Since the targets are illuminated by a nearly instantaneous flash from the
strobe, the camera doesn’t have to be steady. This is one of the greatest
advantages of photogrammetry over other large-volume, high-accuracy
measurement technologies. The camera can be used on scaffolding, lifts,
ladders, etc. and can be used in environments where movement or
vibration is occurring.

How far away do I have to get from the object
to measure it?
Where should I locate the camera to get a
good measurement?

The distance from the camera to the object is very easy to determine.
Simply get back far enough to see the object you want to measure (or
the part of the object you want to measure if you are measuring the
object in sections). As a rule of thumb, you will need to get the same
distance back from the object as the size of the object. For example, you
will need to get about ten feet back to measure a ten foot object. See
Field of View for more details.
If you haven’t done so already, read question 1 above about factors
affecting accuracy, especially the fourth factor regarding geometry. Of
course, getting good geometry isn’t the only consideration when
deciding where to locate the camera for a good measurement. You
must also locate the camera so every target is ultimately seen in at least
two (preferably four) photographs with strong geometry. On objects with
lots of blockage and/or complex surfaces, figuring out where to locate the
camera to get a good measurement can be a challenge.

How can I calibrate the camera and make sure
the measurement is accurate?

V-STARS normally automatically calibrates the camera as a byproduct of
the measurement in a process called self-calibration. Self-calibration is a
very powerful technique that allows the camera to be calibrated at the
time of measurement under the conditions that exist at the time of the
measurement. In order to self-calibrate the camera you must take a
minimum of six photographs if the object is essentially flat, and a minimum
of four photographs if the object isn’t flat. If self-calibration can’t be used
on a particular measurement, pre-calibrated values can be used, but
accuracies may be somewhat lower.
V-STARS also provides internal estimates of accuracy for each measured
point. These internal estimates of accuracy have been extensively
compared to external measures of accuracy (repeatability, artifacts,
known distances, measurements by other systems, etc.) and have been
found to be consistent and reliable. This is important because often in
everyday measurements one does not have access to external measures
of accuracy and must rely on the internal accuracy estimate as a quality indicator.

Assurgent Technology Solutions (P) Ltd. is a software company dealing with photogrammetry applications.