Micasense RedEdge-P Multispectral Kit + DJI Skyport for M300
The RedEdge-P is a single-camera solution for synchronized capture of calibrated, high-resolution multispectral and RGB imagery, with an FOV and capture rate optimized for efficient flights. It seamlessly integrates a high-resolution all-color imager with synchronized multispectral imagers to enable pan-sharpened RGB and multispectral outputs at 2 cm GSD from 60 m.
The RedEdge-P SkyPort comes ready to integrate with the DJI M300 enabling RTK capabilities.
|WEIGHT:|350 g (12.3 oz) RedEdge-P + DLS 2|
|DIMENSIONS:|8.9 x 7.0 x 6.7 cm (3.5 in x 2.8 in x 2.6 in)|
|RGB OUTPUT*:|5.1 MP (global shutter, aligned with all bands)|
|SENSOR RESOLUTION:|1456 x 1088 (1.6 MP per MS band), 2464 x 2056 (5.1 MP panchromatic band)|
|GROUND SAMPLE DISTANCE:|8 cm per pixel (per MS band) at 120 m (~400 ft) AGL, 4 cm per pixel (panchromatic band) at 120 m (~400 ft) AGL|
|FIELD OF VIEW:|50° HFOV x 38° VFOV (MS), 44° HFOV x 38° VFOV (PAN)|
|EXTERNAL POWER:|7.0 V - 25.2 V|
|POWER INPUT:|5.5/7.0/10 W (standby, average, peak)|
|CAPTURE RATE:|Three captures per second raw DNG|
|INTERFACES:|Serial, 10/100/1000 Ethernet, USB 2.0 port for WiFi, CFexpress for storage; select from trigger input, PPS input, PPS output, and top-of-frame signals; host virtual button|
|SPECTRAL BANDS:|Blue (475 nm center, 32 nm bandwidth), Green (560 nm center, 27 nm bandwidth), Red (668 nm center, 14 nm bandwidth), Red Edge (717 nm center, 12 nm bandwidth), Near-IR (842 nm center, 57 nm bandwidth)|
*With appropriate post-processing
**Note: Specifications are subject to change without notice
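Ground sample distance scales linearly with altitude, so the figures above can be extrapolated to other flight heights. A minimal sketch (the function name and reference values are illustrative):

```python
def gsd_at_altitude(gsd_ref_m: float, alt_ref_m: float, alt_m: float) -> float:
    """Scale a known ground sample distance linearly to a new altitude."""
    return gsd_ref_m * alt_m / alt_ref_m

# Panchromatic band: 4 cm/pixel at 120 m implies 2 cm/pixel at 60 m,
# consistent with the 2 cm GSD from 60 m quoted for pan-sharpened output.
print(gsd_at_altitude(0.04, 120.0, 60.0))  # 0.02
```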
What drones can I use with MicaSense sensors?
There are a number of different options for drones that integrate with MicaSense sensors. As a general rule, if the drone can carry the weight of the sensor and can supply appropriate power to the camera, it will work to carry the sensor. MicaSense provides integration kits for commonly used DJI drones, such as the Matrice 300, and also has a number of integration partners who provide specialized aircraft with MicaSense sensors integrated.
Check out our Integration Kits page for more information about the kits we offer or our Partner’s page for a list of these partners.
There are a myriad of data processing options available for data captured with our sensors, whether on a computer or in the cloud. The most commonly used local data processing software platforms are Pix4D and Agisoft.
If you prefer to process in the cloud, options like Solvi, Pix4Dfields, Aerobotics, and Measure Ground Control are available. These services generally have more user-friendly analysis tools for specific workflows–for example, Solvi has zonal statistics capabilities and Aerobotics provides tools specific to tree crops.
For more information on processing and analytics platforms visit our Software Providers Page.
What is the max flight speed I can use with MicaSense sensors?
Our sensors can fly quite fast without blurring problems; at 120 meters, it's generally good to stay under 20 m/s. The limiting factor on flight speed is usually the sensor's ability to trigger fast enough to get adequate overlap (70%) for processing the data.
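The overlap constraint can be sketched numerically: the camera may advance at most 30% of its along-track ground footprint between captures. The parameters below (footprint from pixels × GSD, a conservative sustained capture rate of 1 Hz) are illustrative assumptions, not guaranteed performance figures:

```python
def max_speed_for_overlap(gsd_m: float, pixels_along_track: int,
                          overlap: float, capture_rate_hz: float) -> float:
    """Maximum ground speed that still yields the desired forward overlap."""
    footprint_m = gsd_m * pixels_along_track            # image length on the ground
    advance_per_shot_m = (1.0 - overlap) * footprint_m  # allowed travel between captures
    return advance_per_shot_m * capture_rate_hz

# MS band at 120 m: 8 cm GSD, 1088 pixels along track, 70% overlap,
# assuming a sustained rate of 1 capture/s:
print(max_speed_for_overlap(0.08, 1088, 0.70, 1.0))  # m/s
```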
What are the advantages of the DJI SkyPort adapter versus a standard integration kit?
If you use a DJI Matrice 200 or a Matrice 210 exclusively and are going to be using different payloads often, SkyPort makes attachment and removal of the camera really simple. It also provides power and triggering communication to the camera.
Can I use Altum’s thermal sensor independently from the multispectral sensor?
Altum captures synchronized thermal and multispectral imagery; it is not possible to turn off the multispectral cameras, so you will get six images (five multispectral plus one thermal) for every capture.
Do you have a store near me or do I purchase directly from you?
To purchase our products you can either buy directly from our website or from one of our drone integrator or reseller partners. Our drone partners are located all over the world; visit our partners page to find a reseller local to your area.
How can a MicaSense camera help my business?
Walking an entire field to monitor crops is time-consuming and labor-intensive. MicaSense sensors can capture data faster, which enables early detection of potential issues with things like water, pests, nutrient deficiencies, and input applications. To learn more about how MicaSense sensors can help your business, visit blog.micasense.com
Can you use MicaSense cameras from an airplane or helicopter?
Our cameras have been used by customers from manned aircraft. If you have the ability to mount the camera and DLS 2 as well as provide power, it can be done. You can find more information on the technical integration in our Knowledge Base.
Is there a direct live feed of Altum’s Multispectral and Thermal Bands?
You can get a live stream via the API, but the video rate would be quite slow. If you do not save to the SD card, it would be around 3Hz. The raw data has not been processed so it would not appear as an RGB video stream. The intention is really to analyze the data post-processing, once the bands have been aligned and the images have been stitched.
What is the Downwelling Light Sensor and how does it work?
The Downwelling Light Sensor (DLS) is a sensor that helps improve reflectance calibration in situations where ambient light conditions change in the middle of a flight. The DLS records the amount of light coming from the sky throughout the flight, embedding this data within the metadata of each image for each band. In post-processing, data from the DLS is used to correct global changes in lighting conditions, such as when the sky is completely overcast and irradiance fluctuates during a flight. However, it cannot reliably correct for partly cloudy days, where popcorn clouds may shadow part of the earth but not the sensor.
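In post-processing, the recorded downwelling irradiance is typically used to convert at-sensor radiance into reflectance. A minimal sketch assuming a Lambertian (perfectly diffuse) surface; the function name and values are illustrative:

```python
import math

def radiance_to_reflectance(radiance: float, dls_irradiance: float) -> float:
    """Convert at-sensor radiance to reflectance using downwelling irradiance,
    assuming a Lambertian surface (reflectance = pi * L / E)."""
    return math.pi * radiance / dls_irradiance

# If lighting dims mid-flight, radiance and irradiance drop together,
# so the computed reflectance of the same surface stays stable:
print(radiance_to_reflectance(0.10, 1.0))  # full sun
print(radiance_to_reflectance(0.05, 0.5))  # same surface, half the light
```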
What are some common uses and applications of MicaSense sensors?
MicaSense sensors are mainly used for vegetation health mapping, with applications spanning agriculture, forestry, environmental monitoring, and even archaeology. Visit our blog to learn how people are using our sensors on a case-by-case basis.
What is multispectral imaging?
The colors we see in light are defined by the wavelength of that light. Plants absorb and reflect light differently depending on this wavelength. Plants typically absorb blue light and red light, while reflecting some green light. They also reflect a much larger amount of near-infrared (NIR) light, which is not visible to the human eye but is visible to multispectral cameras like RedEdge and Altum. The reflectance curve of a typical plant is shown below. Reflectance is the percentage of light that is reflected by the plant.
By measuring the reflectance of a plant at different wavelengths, multispectral imaging enables identification of areas of stress in a crop, and provides a quantitative metric for the vigor of a plant.
How do multispectral cameras work?
Multispectral cameras work by imaging different wavelengths of light. Professional multispectral cameras have multiple imagers, each with a special optical filter that allows only a precise set of light wavelengths to be captured by that imager. The output of the camera is a set of images for that particular wavelength. These sets of images are then stitched together to create geographically accurate mosaics, with multiple layers for each wavelength. Mathematically combining these layers yields vegetation indices. There are many types of vegetation indices that measure different characteristics of a plant.
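As one common example of mathematically combining these layers, NDVI contrasts the NIR and red bands. A short sketch using NumPy, where the arrays stand in for aligned reflectance layers:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero

# Healthy vegetation reflects NIR strongly and absorbs red:
nir = np.array([0.50, 0.30])  # illustrative reflectance values
red = np.array([0.08, 0.20])
print(ndvi(nir, red))  # higher value -> more vigorous vegetation
```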
How do professional multispectral cameras differ from single-imager multispectral cameras?
A single-imager multispectral camera uses a blocking filter combined with the standard camera’s built-in filter to capture information in three wavelength bands. Because these imagers aren’t optimized for remote sensing, the built-in filters are wideband and suffer from data contamination from neighboring bands.
A professional multispectral camera like RedEdge or Altum uses narrowband filters with known characteristics combined with factory calibration parameters, enabling accurate measurements of reflectance that a converted camera simply cannot match.
Can MicaSense sensors be used in greenhouses?
While it is possible to capture single images in a greenhouse, our sensors are optimized for use in drones, meaning they are designed to look at plants from a distance of more than 30 meters. When taking images in a greenhouse, the subjects are much closer, and the imagery is likely to have parallax effects. This can cause complications in generating indices and composites.
Can MicaSense sensors be used over water?
Wind and sun reflection make capturing data over water with RedEdge-MX and Altum a complex process due to potential distortion and pixel saturation. However, the bands included in the Dual Camera Imaging System make it possible to identify and map vegetation in shallow water environments, opening multispectral imagery to uses like riparian vegetation and coastal land mapping. Over open water, though, stitching can be difficult due to pixel saturation; some people use buoys as ground control points to help stitch together the imagery later.
What is the panchromatic band, and what are the advantages?
The panchromatic sensor, found on the RedEdge-P and the Altum-PT, is sensitive to all colors in the visible through near-infrared spectrum. The panchromatic sensor enables higher resolution without large lenses and imagers for each of the multispectral bands, which optimizes the camera’s weight and minimizes data volume.
How does pan sharpening work?
Pan sharpening uses a higher-resolution panchromatic image fused with lower-resolution multiband images to create a higher-resolution raster dataset. There are many proven methods for pan sharpening (which we won’t cover here).
The RedEdge-P and Altum-PT will have the best spectral accuracy at the original multispectral resolution. Some pan-sharpening methods are better than others at preserving spectral quality, and in the case of the RedEdge-P and Altum-PT, the multispectral bands do not cover the full panchromatic band because there are gaps between the multispectral bands. This means there may be spectral information between the multispectral bands that the panchromatic imager measures but the multispectral imagers do not. We have found that in practice, for many vegetation tasks, this difference is not significant.
The spatial resolution increase after pan-sharpening enables a host of applications that require higher spatial resolutions, such as AI/ML based methods for classification, counting, and methods where 3D information from the point cloud can be used.
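As one illustration of the fusion step, the classic Brovey transform rescales each upsampled multispectral band by the ratio of the pan image to the bands' mean intensity. This is a generic method sketch, not the specific algorithm used by any particular processing software:

```python
import numpy as np

def brovey_sharpen(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Brovey-style pan sharpening.
    ms:  (bands, H, W) multispectral stack, already upsampled to pan resolution.
    pan: (H, W) panchromatic image at full resolution.
    """
    intensity = ms.mean(axis=0)       # per-pixel mean of the MS bands
    ratio = pan / (intensity + 1e-9)  # spatial detail from the pan band
    return ms * ratio                 # inject that detail into each band

# Toy 2x2 example with 3 bands: pan is twice the MS intensity,
# so every band is scaled up by ~2x.
ms = np.full((3, 2, 2), 0.2)
pan = np.full((2, 2), 0.4)
print(brovey_sharpen(ms, pan)[0, 0, 0])  # ~0.4
```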
If the panchromatic band senses the full visible light spectrum, why does the image look black & white?
The panchromatic bands found in the RedEdge-P and Altum-PT are monochromatic imagers, not RGB. These imagers do capture a wide bandwidth of visible light information, but the image preview will be in black & white – just like the multispectral images.
What makes a quality remote sensing image?
In remote sensing imagery, the quality of an image is determined by three categories: spatial, spectral, and temporal resolution.
Spatial resolution refers to the number of pixels in an image and the amount of detail the data provides. Temporal resolution refers to how often information can be collected and recorded. And spectral resolution indicates how much of the electromagnetic spectrum is recorded.
The spectral quality of our multispectral sensors provides a great deal of information on plant health. For Machine Learning (ML) and Artificial Intelligence (AI) applications, enhanced spatial resolution can provide improved results. Today, most users who need this improved spatial resolution fly with a RedEdge-MX or Altum sensor along with an RGB camera. While providing higher resolution, these payloads do not offer synchronized capture of imagery, resulting in challenges aligning the RGB and multispectral data in post-processing. This is where our new panchromatic sensors come in, opening the door to new applications.
What are the applications of the panchromatic band? Can it see things which other sensors cannot?
The major advantage of the panchromatic band in the RedEdge-P and Altum-PT is that it enables higher resolution imagery without the need for larger, more expensive lenses and imagers for each multispectral band. Being able to visualize your data at leaf-level resolution opens a whole new level of analysis and insights.
The panchromatic band senses a wide range of visible light at high resolution, compared to the narrow band filters of the multispectral bands.
Does the pan sharpening happen in the camera?
No. There are no image processing techniques applied to the images in the camera. The pan-sharpening happens entirely in post-processing, using the raw data captured from the cameras.
Need something custom?
Talk to one of our camera integration specialists about the right configuration for you. Click on the button below!