Publications catalogue - journals
The Photogrammetric Record
Abstract/Description – provided by the publisher in English
The Photogrammetric Record is an international journal containing original, independently and rapidly refereed articles that reflect modern advancements in photogrammetry, 3D imaging, computer vision, and other related non-contact fields. All aspects of the measurement workflow are relevant, from sensor characterisation and modelling, data acquisition, processing algorithms and product generation, to novel applications. The journal provides a record of new research which will contribute both to the advancement of photogrammetric knowledge and to the application of techniques in novel ways. It also seeks to stimulate debate through correspondence, and carries reviews of recent literature from the wider geomatics discipline.
Keywords – provided by the publisher
Photogrammetry; laser scanning; lidar; range imaging; optical metrology; GIS; remote sensing; digita
Availability

| Detected institution | Period | Browse | Download | Request |
|---|---|---|---|---|
| Not detected | from Jan. 1953 to Dec. 2023 | Wiley Online Library | | |
Information
Resource type:
journals
Print ISSN
0031-868X
Electronic ISSN
1477-9730
Publisher
John Wiley & Sons, Inc. (WILEY)
Country of publication
United Kingdom
Publication date
1953-
Table of contents
doi: 10.1111/phor.12457
Underground hyperspectral outcrop scanning for automated mine‐face mapping: The lithium deposit of Zinnwald/Cínovec
Moritz Kirsch; Maria Mavroudi; Sam Thiele; Sandra Lorenz; Laura Tusa; René Booysen; Erik Herrmann; Ayoub Fatihi; Robert Möckel; Thomas Dittrich; Richard Gloaguen
Abstract: The inherent complexity of underground mining requires highly selective ore extraction and adaptive mine planning. Repeated geological face mapping and reinterpretation throughout mine life is therefore routine in underground mines. Hyperspectral imaging (HSI) has successfully been applied to enhance geological mapping in surface mining environments, but remains a largely unexplored opportunity in underground operations due to challenges associated with illumination, wet surfaces and data corrections. In this study, we propose a workflow that paves the way for the operational use of HSI in active underground mines. In a laboratory set-up, we evaluated different hyperspectral sensors and lighting set-ups as well as the effect of surface moisture. We then acquired hyperspectral data in an underground mine of the Zinnwald/Cínovec Sn-W-Li greisen-type deposit in Germany. These data were corrected for illumination effects, back-projected into three dimensions and then used to map mineral abundance and estimate Li content across the mine face. We validated the results with handheld laser-induced breakdown spectroscopy. Despite remaining challenges, we hope this study will help establish hyperspectral sensors in the extractive industry as a means to increase the volume and efficiency of raw material supply, advance digitalisation, and reduce the environmental footprint and other risks associated with underground mining.
Keywords: Earth and Planetary Sciences (miscellaneous); Computers in Earth Sciences; Computer Science Applications; Engineering (miscellaneous).
Pp. not available
doi: 10.1111/phor.12460
A point cloud filtering method based on anisotropic error model
Mustafa Ozendi; Devrim Akca; Hüseyin Topan
Abstract: Many modelling applications require 3D meshes that should be generated from filtered/cleaned point clouds. This paper proposes a methodology for filtering terrestrial laser scanner (TLS)-derived point clouds, consisting of two main parts: an anisotropic point error model and subsequent decimation steps for the elimination of low-quality points. The point error model can compute the positional quality of any point in the form of error ellipsoids. It is formulated as a function of the angular/mechanical stability, sensor-to-object distance, laser beam's incidence angle and surface reflectivity, which are the most dominant error sources. In a block of several co-registered point clouds, some parts of the target object are sampled by multiple scans with different positional quality patterns. This situation results in redundant data. The proposed decimation steps remove this redundancy by selecting only the points with the highest positional quality. Finally, the Good, Bad, and the Better algorithm, based on the ray-tracing concept, was developed to remove the remaining redundancy due to Moiré effects. The resulting point cloud consists of only the points with the highest positional quality while reducing the number of points by a factor of 10. This novel approach resulted in final surface meshes that are accurate, contain a predefined level of random errors and require almost no manual intervention.
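The decimation idea above — score each point by an error model driven by range, incidence angle and reflectivity, then keep only the best sample among redundant ones — can be sketched as follows. The error terms and constants here (`sigma_range`, `sigma_angular`, the cosine/reflectivity scaling) are illustrative assumptions, not the paper's actual formulation:

```python
import math

def point_quality(range_m, incidence_deg, reflectivity,
                  sigma_range=0.002, sigma_angular=1e-4):
    """Illustrative scalar positional-quality score for a TLS point
    (smaller = better). Range error grows with incidence angle and
    low reflectivity; lateral error grows linearly with range via
    angular uncertainty. Returns the product of the three ellipsoid
    semi-axes, a volume-like quality measure."""
    inc = math.radians(incidence_deg)
    s_range = sigma_range / (math.cos(inc) * max(reflectivity, 0.05))
    s_lateral = sigma_angular * range_m  # beam divergence / angular jitter
    return s_range * s_lateral * s_lateral

def keep_best(samples):
    """Among redundant samples (range_m, incidence_deg, reflectivity)
    of the same surface point from co-registered scans, keep the one
    with the best (lowest) error score."""
    return min(samples, key=lambda s: point_quality(*s))
```

A near, head-on, bright return thus wins over a distant, oblique, dark one, which is the selection behaviour the decimation steps rely on.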
Keywords: Earth and Planetary Sciences (miscellaneous); Computers in Earth Sciences; Computer Science Applications; Engineering (miscellaneous).
Pp. not available
doi: 10.1111/phor.12463
Floor plan creation using a low‐cost 360° camera
Jakub Vynikal; David Zahradník
Abstract: The creation of a 2D floor plan is an integral part of finishing a building construction. Legal obligations in different states often include submitting a precise floor plan for ownership purposes, as the building needs to be divided between new residents with reasonable precision. Common practice for floor plan generation includes manual measurements (tape or laser) and laser scanning (static or SLAM). In this paper, a novel approach is proposed using spherical photogrammetry, which is becoming increasingly popular due to its versatility, low cost and unexplored possibilities. The workflow is also noticeably faster than other methods, as video acquisition is rapid, on a par with SLAM. The accuracy and reliability of the measurements are then experimentally verified, comparing the results with established methods.
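Spherical photogrammetry as used above rests on a simple geometric fact: every pixel of an equirectangular 360° frame maps directly to a viewing ray. A minimal sketch of that mapping (standard equirectangular convention; not tied to any particular camera model from the paper):

```python
import math

def pixel_to_ray(u, v, width, height):
    """Convert an equirectangular (spherical) image pixel to a unit
    ray direction in the camera frame: u maps to longitude across
    the full 360 degrees, v to latitude from +90 to -90 degrees."""
    lon = (u / width) * 2.0 * math.pi - math.pi   # [-pi, pi)
    lat = math.pi / 2.0 - (v / height) * math.pi  # [pi/2 .. -pi/2]
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

Intersecting such rays from multiple frames (with known poses) yields the wall points from which a 2D floor plan can be traced.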
Keywords: Earth and Planetary Sciences (miscellaneous); Computers in Earth Sciences; Computer Science Applications; Engineering (miscellaneous).
Pp. not available
doi: 10.1111/phor.12464
Weak texture remote sensing image matching based on hybrid domain features and adaptive description method
Wupeng Yang; Yongxiang Yao; Yongjun Zhang; Yi Wan
Abstract: Weak texture remote sensing images (WTRSIs) have characteristics such as low reflectivity, high similarity between neighbouring pixels and insignificant differences between regions. These factors make feature extraction and description difficult, which leads to unsuccessful matching. Therefore, this paper proposes a novel hybrid-domain features and adaptive description (HFAD) approach to perform WTRSI matching. This approach makes two main contributions: (1) a new feature extractor is established that combines the spatial-domain scale space and the frequency-domain scale space, where a weighted least squares filter combined with a phase consistency filter is used to build the frequency-domain scale space; and (2) a new log-polar descriptor of adaptive neighbourhood (LDAN) is established, where the neighbourhood window size of each descriptor is calculated according to the log-normalised intensity value of the feature point. This article presents a set of remote sensing images of weak texture scenes, including deserts, dense forests, water, ice and snow, and shadows. The data set contains 50 typical image pairs, on which the proposed HFAD was demonstrated and compared with state-of-the-art matching algorithms (RIFT, HOWP, KAZE, POS-SIFT and SIFT). The statistical results of the comparative experiments show that HFAD achieves matching accuracy within two pixels and confirm that the proposed algorithm is robust and effective.
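The log-polar descriptor layout mentioned in contribution (2) partitions the neighbourhood around a feature point into logarithmically spaced radial rings and uniform angular sectors. A minimal sketch of that binning (the ring/sector counts are illustrative defaults in the spirit of GLOH/RIFT-style descriptors, not the paper's exact parameters):

```python
import math

def log_polar_bin(dx, dy, r_max, n_rings=3, n_angles=8):
    """Map an offset (dx, dy) from the feature point into a log-polar
    descriptor bin index: rings are spaced logarithmically in radius
    (fine near the centre, coarse far out), sectors uniformly in angle."""
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx) % (2 * math.pi)
    # logarithmic ring index in [0, n_rings)
    ring = min(int(math.log1p(r) / math.log1p(r_max) * n_rings), n_rings - 1)
    sector = min(int(theta / (2 * math.pi) * n_angles), n_angles - 1)
    return ring * n_angles + sector
```

The descriptor is then a histogram (e.g. of gradient orientations) accumulated per bin; the adaptive part of LDAN would additionally scale `r_max` per feature point from its log-normalised intensity.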
Keywords: Earth and Planetary Sciences (miscellaneous); Computers in Earth Sciences; Computer Science Applications; Engineering (miscellaneous).
Pp. not available
doi: 10.1111/phor.12462
High‐resolution optical remote sensing image change detection based on dense connection and attention feature fusion network
Daifeng Peng; Chenchen Zhai; Yongjun Zhang; Haiyan Guan
Abstract: The detection of ground object changes from bi-temporal images is of great significance for urban planning, land-use/land-cover monitoring and natural disaster assessment. To address incomplete change detection (CD) entities and inaccurate edges caused by the loss of detailed information, this paper proposes a network based on dense connections and attention feature fusion, namely Siamese NestedUNet with Attention Feature Fusion (SNAFF). First, multi-level bi-temporal features are extracted through a Siamese network. The dense connections between the sub-nodes of the decoder are used to compensate for missing location information as well as to weaken the semantic differences between features. Then, an attention mechanism is introduced to combine global and local information to achieve feature fusion. Finally, a deep supervision strategy is used to mitigate vanishing gradients and slow convergence. During the testing phase, a test-time augmentation (TTA) strategy is adopted to further improve CD performance. To verify the effectiveness of the proposed method, two datasets with different change types are used. The experimental results indicate that, compared with the comparison methods, the proposed SNAFF achieves the best quantitative results on both datasets, with F1, IoU and OA of 91.47%, 84.28% and 99.13% on the LEVIR-CD dataset, and 96.91%, 94.01% and 99.27% on the CDD dataset, respectively. In addition, the qualitative results show that SNAFF can effectively retain the global and edge information of the detected entity, thus achieving the best visual performance.
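The test-time augmentation (TTA) step works by running the trained model on several transformed copies of the input, undoing each transform on the prediction, and averaging. A minimal, model-agnostic sketch using flips (the paper's exact augmentation set is not specified here; `predict` stands in for the trained network returning a per-pixel change-probability map):

```python
def tta_predict(predict, image):
    """Test-time augmentation: average change-probability maps over
    the original image and horizontal/vertical flips, undoing each
    flip on the prediction before averaging."""
    def hflip(img): return [row[::-1] for row in img]
    def vflip(img): return img[::-1]
    variants = [
        (image, lambda p: p),   # identity
        (hflip(image), hflip),  # flip is its own inverse
        (vflip(image), vflip),
    ]
    acc = None
    for aug, undo in variants:
        p = undo(predict(aug))
        if acc is None:
            acc = [list(row) for row in p]
        else:
            for i, row in enumerate(p):
                for j, v in enumerate(row):
                    acc[i][j] += v
    n = len(variants)
    return [[v / n for v in row] for row in acc]
```

Averaging over augmentations smooths prediction noise at entity edges, which is where the abstract reports the largest gains.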
Keywords: Earth and Planetary Sciences (miscellaneous); Computers in Earth Sciences; Computer Science Applications; Engineering (miscellaneous).
Pp. not available
doi: 10.1111/phor.12470
Linear target change detection from a single image based on three‐dimensional real scene
Yang Liu; Zheng Ji; Lingfeng Chen; Yuchen Liu
Abstract: Change detection is a critical component in the field of remote sensing, with significant implications for resource management and land monitoring. Currently, most conventional methods for remote sensing change detection rely on qualitative monitoring, which usually requires data collection from the entire scene over multiple time periods; this can be computationally intensive and lacks reusability, especially when dealing with large datasets. In this paper, we propose a novel methodology that leverages the texture features and geometric structure information derived from three-dimensional (3D) real scenes. By establishing a two-dimensional (2D)–3D geometric relationship between a single observational image and the corresponding 3D scene, we can obtain more accurate positional information for the image. This relationship allows us to transfer the depth information from the 3D model to the observational image, thereby facilitating precise geometric change measurements for specific planar targets. Experimental results indicate that our approach enables millimetre-level change detection of minuscule targets based on a single image. Compared with conventional methods, our technique offers enhanced efficiency and reusability, making it a valuable tool for the fine-grained change detection of small targets based on a 3D real scene.
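The depth-transfer step above amounts to back-projecting an image pixel into 3D once a depth value is borrowed from the registered 3D model. A minimal pinhole-camera sketch (the intrinsics and values are illustrative, and this simplifies away the paper's full 2D–3D registration):

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project an image pixel (u, v) to a 3D camera-frame point
    using the pinhole model and a depth value transferred from the
    registered 3D scene model."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return (x, y, depth)

def displacement_mm(p0, p1):
    """Metric displacement between two 3D points, in millimetres."""
    return 1000.0 * sum((a - b) ** 2 for a, b in zip(p0, p1)) ** 0.5
```

Measuring the same target point in images from two epochs and differencing the back-projected coordinates is what makes millimetre-level quantification possible from a single image per epoch.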
Keywords: Earth and Planetary Sciences (miscellaneous); Computers in Earth Sciences; Computer Science Applications; Engineering (miscellaneous).
Pp. not available
doi: 10.1111/phor.12476
Building extraction from oblique photogrammetry point clouds based on PointNet++ with attention mechanism
Hong Hu; Qing Tan; Ruihong Kang; Yanlan Wu; Hui Liu; Baoguo Wang
Abstract: Unmanned aerial vehicles (UAVs) capture oblique point clouds in outdoor scenes that contain considerable building information. Building features extracted from images are affected by the viewing point, illumination, occlusion, noise and image conditions, which makes building features difficult to extract. Ground elevation changes can provide powerful aids for the extraction, and point cloud data can precisely reflect this information; thus, oblique photogrammetry point clouds have significant research implications. Traditional building extraction methods involve filtering and sorting the raw data to separate buildings, which causes the point clouds to lose spatial information and reduces building extraction accuracy. Therefore, we develop an intelligent building extraction method based on deep learning that incorporates an attention mechanism module into the Sampling and PointNet operations within the set abstraction layer of the PointNet++ network. To assess the efficacy of our approach, we train on and extract buildings from a dataset created from UAV oblique point clouds of five regions in the city of Bengbu, China. Impressive performance metrics are achieved, including 95.7% intersection over union, 96.5% accuracy, 96.5% precision, 98.7% recall and 97.8% F1 score. With the addition of the attention mechanism, the overall training accuracy of the model improves by about 3%. This method shows potential for advancing the accuracy and efficiency of digital urbanisation construction projects.
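Inside a PointNet++ set-abstraction layer, each local point group is pooled into one feature vector; an attention module replaces uniform pooling with learned, score-weighted pooling. A minimal sketch of that weighting (plain softmax attention over per-point scores; the paper's exact module design is not reproduced here):

```python
import math

def attention_pool(features, scores):
    """Attention-weighted pooling of per-point features: softmax the
    learned per-point scores (numerically stabilised by subtracting
    the max), then return the weighted sum. An illustrative stand-in
    for an attention module inside a set-abstraction layer."""
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    w = [x / z for x in w]
    dim = len(features[0])
    return [sum(w[i] * features[i][d] for i in range(len(features)))
            for d in range(dim)]
```

With equal scores this reduces to mean pooling; a strongly scored point (say, on a roof edge) dominates the pooled feature, which is the mechanism credited with the roughly 3% accuracy gain.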
Keywords: Earth and Planetary Sciences (miscellaneous); Computers in Earth Sciences; Computer Science Applications; Engineering (miscellaneous).
Pp. not available
doi: 10.1111/phor.12475
Improvement of the spaceborne synthetic aperture radar stereo positioning accuracy without ground control points
Yu Wei; Ruishan Zhao; Qiang Fan; Jiguang Dai; Bing Zhang
Abstract: Compared with optical remote sensing satellites, the geometric positioning accuracy of synthetic aperture radar (SAR) satellites is not affected by satellite attitude or weather conditions. SAR satellites can achieve relatively high positioning accuracy without ground control points, which is particularly important in global surveying and mapping. However, the stereo positioning accuracy of SAR satellites is mainly affected by the SAR systematic delay and the atmospheric propagation delay of radar signals. An iterative compensation method for the SAR systematic time delay is proposed, based on a digital elevation model, to improve the stereo positioning accuracy of SAR satellites without control points. In addition, to address the non-real-time updates of external reference atmospheric parameters, an iterative compensation method to estimate the atmospheric propagation delay of radar signals is proposed based on standard atmospheric models. In this study, SAR images from the Gaofen-3 (GF-3) satellite with 5 m resolution were used as experimental data to verify the effectiveness of the proposed method. With the proposed compensation, the 2D positioning accuracy was better than 3 m (a 42.9% improvement), and the elevation positioning accuracy was better than 3 m (a 90.2% improvement).
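The iterative compensation idea above has a simple fixed-point structure: the delay to remove depends on the (unknown) corrected range, so one alternates between estimating the delay and re-correcting the range until convergence. A generic sketch (the `delay_model` used in the test, a constant systematic term plus a small range-proportional atmospheric term, is a hypothetical placeholder, not the paper's model):

```python
def compensate_range(measured_range, delay_model, tol=1e-6, max_iter=20):
    """Iteratively remove a range-dependent propagation delay (in
    metres of range) from a measured SAR range: re-estimate the delay
    at the current range guess, re-correct, and stop when the update
    falls below `tol`."""
    r = measured_range
    for _ in range(max_iter):
        r_new = measured_range - delay_model(r)
        if abs(r_new - r) < tol:
            return r_new
        r = r_new
    return r
```

Because the delay varies only weakly with range, the iteration contracts quickly, typically converging in a handful of steps.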
Keywords: Earth and Planetary Sciences (miscellaneous); Computers in Earth Sciences; Computer Science Applications; Engineering (miscellaneous).
Pp. not available
doi: 10.1111/phor.12467
A 3D urban scene reconstruction enhancement approach based on adaptive viewpoint selection of panoramic videos
Xujie Zhang; Zhenbiao Hu; Qingwu Hu; Jun Zhao; Mingyao Ai; Pengcheng Zhao; Jiayuan Li; Xiaojie Zhou; Zongqiang Chen
Abstract: The widely used unmanned aerial vehicle oblique photogrammetry often suffers from information loss in complex urban environments, leading to geometric and textural defects in the resulting models. In this study, a close-range panoramic optimal viewpoint selection assisted 3D urban scene reconstruction enhancement method is proposed for areas prone to defects. We first introduce the ground panoramic data acquisition equipment and strategy, which differ from those of the single-lens supplementary photography method: data acquisition is accomplished through a single, continuous surround-style collection pass. The full coverage of the panoramic video in the space–time dimension enables the acquisition of texture details without camera station planning. Then, a panoramic multiview image generation approach is proposed. Adaptive viewpoint selection is achieved using unbiased sampling points from the rough scene model, and viewpoint optimisation is adopted to ensure sufficient image overlap and intersection geometry, thus improving the scene reconstructability. Finally, the 3D model is generated by photogrammetric processing of the panoramic multiview images, resulting in an enhanced modelling effect. To validate the proposed method, we conducted experiments using real data from Qingdao, China. Both the qualitative and quantitative results demonstrate a significant improvement in the quality of geometric and textural reconstruction. The tie-point reprojection errors are less than 1 pixel, and the registration accuracy with the model from oblique photogrammetry is comparable to that of optimised-view photography. By eliminating the need for on-site camera station planning or manual flight operations and effectively minimising the redundancy of panoramic videos, our approach significantly reduces the photography and computation costs associated with reconstruction enhancement. Thus, it presents a feasible technical solution for the generation of fine urban 3D models.
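Viewpoint selection of the kind described above is often cast as a set-cover-style problem: choose viewpoints so every sampled surface point is seen from enough views for reliable triangulation. A greedy sketch under that framing (the paper's actual optimisation is not specified here; `candidates` maps a viewpoint id to the set of sample ids it sees):

```python
def select_viewpoints(candidates, samples, min_views=2):
    """Greedy viewpoint selection: repeatedly pick the candidate
    viewpoint that newly covers the most still-undersampled points,
    until every sample is seen from at least `min_views` chosen
    viewpoints or no candidate adds coverage."""
    need = {s: min_views for s in samples}
    chosen = []
    remaining = dict(candidates)
    while any(n > 0 for n in need.values()) and remaining:
        best = max(remaining,
                   key=lambda v: sum(1 for s in remaining[v]
                                     if need.get(s, 0) > 0))
        gain = sum(1 for s in remaining[best] if need.get(s, 0) > 0)
        if gain == 0:
            break  # nothing left improves coverage
        for s in remaining[best]:
            if need.get(s, 0) > 0:
                need[s] -= 1
        chosen.append(best)
        del remaining[best]
    return chosen
```

Keeping the chosen set small is what minimises the redundancy of the panoramic video while preserving the overlap needed for reconstruction.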
Keywords: Earth and Planetary Sciences (miscellaneous); Computers in Earth Sciences; Computer Science Applications; Engineering (miscellaneous).
Pp. not available
doi: 10.1111/phor.12474
Real‐time generation of spherical panoramic video using an omnidirectional multi‐camera system
Jiongli Gao; Jun Wu; Mingyi Huang; Gang Xu
Abstract: This paper presents a novel method for real-time generation of seamless spherical panoramic videos from an omnidirectional multi-camera system (OMS). Firstly, a multi-view video alignment model called spherical projection constrained thin-plate spline (SP-TPS) was established and estimated using an approximately symmetrical seam-line, reducing structural inconsistency around the seam-line. Then, a look-up table was designed to support real-time video re-projection, video dodging and seam-line updates. The table pre-stores, as a whole, the overlapping areas in the OMS multi-view videos, the seam-lines between the spherical panorama and the OMS multi-view videos, and the pixel coordinate mapping between the spherical panorama and the OMS multi-view videos. Finally, a spherical panoramic video is output in real time through look-up table computation on an ordinary GPU. The experiments were conducted on multi-view video taken by "1 + 4" and "1 + 7" OMS, respectively. Experimental results demonstrate that, compared with four state-of-the-art methods reported in the literature and two commercial video-stitching software packages, the proposed method excels at eliminating visual artefacts and demonstrates superior adaptability to scenes with varying depths of field. Assuming the OMS is stationary in the scene, this method can generate seamless spherical panoramic videos at 8K resolution in real time, which is of great value to the surveillance field.
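The look-up-table idea above is what makes real-time output possible: all projection geometry is computed once, and each frame is then composited by pure table lookups. A toy sketch of the pattern (single-source lookups only; the paper's table additionally stores seam-lines and blending data):

```python
def build_lut(width, height, mapping):
    """Precompute a pixel look-up table: for each panorama pixel
    (u, v), store the (camera_id, x, y) source coordinate once, so
    per-frame compositing needs no geometric computation."""
    return [[mapping(u, v) for u in range(width)] for v in range(height)]

def compose(lut, frames):
    """Apply the LUT: fill each panorama pixel from its pre-assigned
    source camera frame."""
    return [[frames[cam][y][x] for (cam, x, y) in row] for row in lut]
```

Because `compose` is a pure gather operation, it maps directly onto one GPU texture fetch per output pixel, which is why an ordinary GPU sustains 8K output.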
Keywords: Earth and Planetary Sciences (miscellaneous); Computers in Earth Sciences; Computer Science Applications; Engineering (miscellaneous).
Pp. not available