In this section you can browse a selected, but not exhaustive, list of technology assets developed by GraphicsVision.AI members. Technology assets are software packages with a TRL (Technology Readiness Level) of 7 or higher that can be made available on demand. Please contact us.

Each entry below is listed as: GMN partner | tech acronym | tech title | TRL, followed by its synopsis, keywords, website URL, contact person and e-mail, and notes.

DFKI | Tracking for AR | Model-based tracking for AR applications with Unity | TRL 7
Synopsis: The technology consists of a tracking module with a Unity API developed for augmented reality applications. The processing pipeline includes 3D scanning of an object and computation of a tracking model. The tracking model is then saved and used to recognize and robustly track the object. The AR application is built with Unity.
Keywords: augmented reality, tracking
Website: http://www.augmented-things.com
Contact: Alain Pagani (DFKI), Alain.Pagani@dfki.de
Notes: Collaboration with Vicomtech already initiated.

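The DFKI tracking module and its Unity API are not described in detail here, so the following is only a minimal, hypothetical sketch of the offline/online split outlined above: build a feature-based "tracking model" from reference views of the scanned object, then recognize the object in live frames. It uses generic OpenCV calls rather than the actual product API, and all function names are illustrative.

```python
# Hypothetical sketch of the "tracking model" idea: an offline model-building
# step from reference views of a scanned object, then online recognition in
# live frames. Generic OpenCV features; the real DFKI module and Unity API differ.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def build_tracking_model(reference_image_paths):
    """Offline step: extract keypoints/descriptors from views of the scanned object."""
    model = []
    for path in reference_image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        keypoints, descriptors = orb.detectAndCompute(img, None)
        model.append((keypoints, descriptors))
    return model

def recognize_in_frame(frame_bgr, model, min_matches=25):
    """Online step: match a live camera frame against the stored model views."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, frame_descriptors = orb.detectAndCompute(gray, None)
    if frame_descriptors is None:
        return False
    best = max((len(matcher.match(des, frame_descriptors))
                for _, des in model if des is not None), default=0)
    return best >= min_matches        # object considered recognized / trackable
```
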
DFKI | AR-Handbook (ID: AR-Handbook_1) | Augmented Reality Enhanced Handbooks | TRL 9
Synopsis: Digital manuals faded directly into the user's field of view via a head-mounted display (HMD) are one of the most frequently used application examples for Augmented Reality (AR). Presented as step-by-step instructions, AR handbooks can significantly simplify and accelerate maintenance, repair, or installation work on complex systems. They explain each individual step precisely and clearly on site, can be called up at any time, reduce the safety risk to the employee, and contribute to perfect results. DFKI's Augmented Reality research department is working on simplifying the creation of these AR handbooks through the integration of AI technologies, with the aim of making them fit for actual operations. In the past, this so-called "authoring" was generally performed manually and at correspondingly high cost; the system often required scripted descriptions of actions that had to be prepared by hand, and expert knowledge of the tracking system in use and of how to install tracking assistance was necessary.
Keywords: augmented reality, handbook, manual, interactive
Website: http://av.dfki.de/projects/ar-handbook/
Contact: Nils Petersen, nils.petersen@dfki.de

VICOMTECH | SK-GEOMLIB | Dimensional inspection in inline production | TRL 9
Synopsis: An innovative system for measuring the complex surfaces of forged inner rings next to the presses where they are produced. Validated in a plant that produces a workpiece of this type every 3 seconds (14 million/year). The technology reduces costs, improves the efficiency of the entire process, and dramatically reduces inspection time (from 17 minutes per workpiece to 3.5 minutes). Since 2015 the solution has been fully operational and meeting the challenges posed, with a significant impact on the reduction of defective parts and a deeper knowledge of the process, which in turn increases the efficiency and productivity of this area. Other deployments have been built on top of this technology.
Keywords: inline inspection, 3D reconstruction
Website: http://www.youtube.com/watch?v=BUp5Ysdu74k
Contact: Gorka Marcos, tech.transfer@vicomtech.org

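SK-GEOMLIB itself is not public; as an illustration of the kind of geometric measurement such an inline inspection system performs, the sketch below fits a circle to 2D points sampled from a simulated ring cross-section and reports the measured diameter, using a standard least-squares circle fit. The data, nominal radius, and function names are made up for the example.

```python
# Hypothetical illustration of one geometric-inspection primitive: fit a circle
# to points sampled from a reconstructed ring section and report the measured
# diameter. The actual SK-GEOMLIB algorithms are not public.
import numpy as np

def fit_circle(points):
    """Least-squares circle fit; returns (center_x, center_y, radius)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

# Simulated measurement points of an inner ring with nominal radius 40 mm.
theta = np.linspace(0, 2 * np.pi, 500)
points = np.column_stack([40.0 * np.cos(theta), 40.0 * np.sin(theta)])
points += np.random.normal(scale=0.05, size=points.shape)   # sensor noise

cx, cy, radius = fit_circle(points)
print(f"measured diameter: {2 * radius:.3f} mm")             # compare to tolerance
```
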
VICOMTECH | IGAMO | Digitization of arrangements in a die cutter with augmented reality | TRL 9
Synopsis: Automates the process of compensating for the pressure differences inherent to industrial punching machines. An Augmented Reality system digitizes the "correction plates" and projects directly onto the die cutter the virtual instructions needed to adjust the die. This technology has saved time and money in the manufacturing process.
Keywords: AR, industrial, punching machines
Website: http://vicomtech.box.com/s/t8crkgakijsrgcfx9sy65qx40a2wvcfj
Contact: Gorka Marcos, tech.transfer@vicomtech.org

VICOMTECH | 3str | Advanced management of 3D warehouses from an ERP | TRL 9
Synopsis: An advanced 3D environment for warehouse management that captures data in real time from an ERP. The three-dimensional representation makes the current state of the warehouse immediately visible. It includes a complete 3D editor capable of representing any warehouse layout; given the correct references, it shows the current state of the warehouse in real time, graphically and color-coded. The module gives clear visual information on current stock, product expiration, stock location, and the optimal picking route.
Keywords: 3D web, ERP
Website: http://vicomtech.box.com/s/nrr90i17hovj17pbt1edk9bpeteorzwe
Contact: Gorka Marcos, tech.transfer@vicomtech.org

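The 3str editor, its ERP integration, and its rendering layer are not public; the following hypothetical sketch only illustrates the color-coding idea described above, mapping assumed ERP-style stock records to a color per storage bin that a 3D viewer could use to shade each location.

```python
# Hypothetical sketch of the color-coding idea: map ERP stock records to a
# color per storage bin so a 3D viewer can shade each location. The real 3str
# editor, ERP schema, and rendering layer are not public; all fields are assumed.
from dataclasses import dataclass
from datetime import date

@dataclass
class BinStatus:
    location: str       # warehouse bin identifier as referenced in the ERP
    quantity: int       # current stock level reported by the ERP
    capacity: int       # maximum stock for this bin
    expires: date       # earliest expiration date of the stored product

def bin_color(status: BinStatus, today: date) -> str:
    """Return an RGB hex color encoding occupancy and expiration risk."""
    if status.expires <= today:
        return "#d62728"                      # red: expired product
    fill = status.quantity / max(status.capacity, 1)
    if fill > 0.9:
        return "#ff7f0e"                      # orange: bin nearly full
    if fill < 0.1:
        return "#aaaaaa"                      # grey: bin nearly empty
    return "#2ca02c"                          # green: normal occupancy

print(bin_color(BinStatus("A-03-2", 40, 50, date(2030, 1, 1)), date.today()))
```
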
VICOMTECH | vidanomaly | Anomaly event detection for video surveillance | TRL 8
Synopsis: A detection system for anomalous movements that reduces the number of alerts reaching the CRA (alarm receiving center). Artificial intelligence applied to video detects movements that are not normal or usual; the system analyzes the scene to reduce the number of false alarms and alleviate the workload of the CRA.
Keywords: computer vision, surveillance
Website: http://www.vicomtech.org
Contact: Gorka Marcos, tech.transfer@vicomtech.org

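The Vicomtech anomaly model is not public; as a minimal illustration of one possible motion-anomaly cue, the sketch below flags frames whose dense optical-flow magnitude deviates strongly from the running average. Thresholds and parameters are arbitrary and purely illustrative.

```python
# Hypothetical sketch of a simple motion-anomaly cue (not the Vicomtech model):
# frames whose overall optical-flow magnitude deviates strongly from the running
# average are flagged as candidate anomalous events for a human operator.
import cv2
import numpy as np

def anomalous_frames(video_path, z_threshold=3.0):
    """Return indices of frames whose overall motion is unusually large."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return []
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    history, flagged, index = [], [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = float(np.linalg.norm(flow, axis=2).mean())
        if len(history) > 30:                    # wait for some normal history
            mean, std = np.mean(history), np.std(history) + 1e-6
            if (magnitude - mean) / std > z_threshold:
                flagged.append(index)            # unusual amount of motion
        history.append(magnitude)
        prev, index = gray, index + 1
    cap.release()
    return flagged
```
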
VICOMTECH | transkit | Online audio transcription | TRL 9
Synopsis: A deep learning system to transcribe audio in different languages. The technology has been validated in different scenarios and exploitation sites. It can also automatically generate subtitles.
Keywords: transcription, speech2text, deep learning
Website: http://www.youtube.com/watch?v=SEuHwTf2Dgo&index=5&list=PLJlMCQn4ams_u54UhOnCBMk4r4J0hDiy4
Contact: Gorka Marcos, tech.transfer@vicomtech.org

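The transkit models and API are not public; the sketch below only illustrates the subtitle-generation step mentioned above, turning timed transcript segments (in an assumed (start, end, text) format) into a standard SRT subtitle file.

```python
# Hypothetical sketch of the subtitle-generation step: turn timed transcript
# segments, as a speech-to-text engine might produce them, into SRT subtitles.
# The segment format is an assumption; the actual transkit output is not public.
def to_srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    hours, rest = divmod(ms, 3_600_000)
    minutes, rest = divmod(rest, 60_000)
    secs, ms = divmod(rest, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """segments: iterable of (start_seconds, end_seconds, text) tuples."""
    lines = []
    for index, (start, end, text) in enumerate(segments, start=1):
        lines += [str(index),
                  f"{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}",
                  text, ""]
    return "\n".join(lines)

print(segments_to_srt([(0.0, 2.4, "Welcome to the demo."),
                       (2.4, 5.1, "Subtitles are generated automatically.")]))
```
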
CCG | FDRM | Face detection and recognition classification models in the wild | TRL 7
Synopsis: Developed and validated classification models for face detection and recognition on open-source data (images and video sequences) that have demonstrated high accuracy and classification performance. Building on these classification models, it is possible to identify human faces in images and video sequences and, using the detected face's bounding box, to estimate the person's gender and age. These technologies resulted from the projects "AGATHA - Intelligent system for surveillance and crime control on open sources of information" and "UH4SP - Unified Hub for Smart Plants", respectively.
Keywords: pattern recognition, machine (deep) learning, computer vision, artificial intelligence
Website: http://www.ccg.pt/projetos/agatha-sistema-inteligente-analise-fontes-informacao-abertas-vigilanciacontrolo-criminalidade/ and www.uh4sp.com/en/
Contact: Miguel Guevara, Miguel.Guevara@ccg.pt

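The FDRM models themselves are not public; the following hypothetical sketch illustrates the detect-then-classify pipeline described above using OpenCV's bundled Haar cascade for detection, with the gender/age estimator left as a stub.

```python
# Hypothetical sketch of the detect-then-classify pipeline: detect faces with
# OpenCV's bundled Haar cascade, then hand each bounding-box crop to a
# gender/age classifier. classify_face is a stub; the real FDRM models differ.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_face(face_crop):
    """Placeholder for a gender/age classifier run on the face bounding box."""
    return {"gender": "unknown", "age": None}

def detect_and_classify(image_path):
    """Detect faces, then classify each detected bounding-box crop."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        results.append(((x, y, w, h), classify_face(img[y:y + h, x:x + w])))
    return results
```
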
CCG | AcousticAve | AcousticAve - Auralization Engine | TRL 7
Synopsis: Auralization software (creation of 3D sound) for room-acoustic modeling and audio spatialization. It can currently be coupled with visual virtualization platforms such as Blender for the generation of real-time 3D sound. Output is available in different audio formats (binaural, ambisonics, ...). Ideal for use in CAVE or HMD virtual environments.
Keywords: 3D sound, auralization, room acoustics, audiovisual production
Contact: Carlos Silva, Carlos.Silva@ccg.pt

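AcousticAve's room-acoustic modeling (early reflections, reverberation, HRTF-based rendering) goes far beyond what can be shown here; the sketch below is only a crude, hypothetical illustration of binaural spatialization, applying an interaural time and level difference to a mono signal as a function of source azimuth. All constants are rough assumptions.

```python
# Hypothetical, very crude illustration of binaural spatialization: apply an
# interaural time difference (ITD) and level difference (ILD) to a mono signal
# depending on source azimuth. This does not reproduce AcousticAve's engine.
import numpy as np

def binauralize(mono, azimuth_deg, sample_rate=48_000):
    """Return an (N, 2) stereo array for a mono source at the given azimuth."""
    az = np.radians(azimuth_deg)                  # 0 = front, +90 = right
    max_itd = 0.00066                             # ~0.66 ms maximum head delay
    delay = int(abs(np.sin(az)) * max_itd * sample_rate)
    far_gain = 1.0 - 0.4 * abs(np.sin(az))        # crude level difference
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * far_gain
    near = mono.copy()
    left, right = (near, far) if az <= 0 else (far, near)
    return np.stack([left, right], axis=1)

# 1-second 440 Hz test tone placed 60 degrees to the listener's right.
tone = np.sin(2 * np.pi * 440 * np.arange(48_000) / 48_000)
stereo = binauralize(tone, azimuth_deg=60)
```
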
CCG | CDP | Collaborative Design Platform | TRL 7
Synopsis: All the steps of the collection design process in a single, private space: this is the concept of the Collaborative Design Platform, which promotes interaction and communication between the production company and its customers. Sharing inspiration and trends, as well as creating, approving and managing proposals, has never been so simple. In PPS5 - ICT4Business, CCG was the partner responsible for developing the collaborative collection design platform. The platform integrates all stages of the creative process of designing fashion collections; it enhances and streamlines communication between the production company and its customers through remote interaction between the designers on both sides, enabling the creation and management of collection proposals in a single private space.
Keywords: design collections
Website: http://www.ccg.pt/projetos/pt21-design-colaborativo/?lang=en
Contact: Ana Lima, ana.lima@ccg.pt

HHI | AutoPost | Deformable Surface Tracking and Alpha Matting for the Automation of Post-production Workflows | TRL 7
Synopsis: The goal of the AutoPost project is to automate major parts of the daily workload in audio-visual post-production, particularly for small and medium post houses, and thereby make post-production more efficient by reducing time-consuming and costly manual processing. It delivers: deformable tracking methods that estimate temporally consistent surface motion, deformation, and shading changes, even in the presence of temporary occlusions under real-world conditions; matting methods that provide accurate and more realistic mattes for VFX and post-production processes, with particular attention to motion blur and deformable surfaces under real-world conditions; and Software Development Kits (SDKs) for the tracking and matting algorithms. The SDKs are the basis for the development of the Plugin Suite and come with initial functional prototypes of tracking and matting plugins, ultimately intended for industry-standard tools.
Keywords: automation, post-production, audiovisual
Website: http://autopost-project.eu/
Contact: Prof. Dr.-Ing. Peter Eisert, peter.eisert@hhi.fraunhofer.de
Notes: tracking plugin

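The AutoPost SDKs are not reproduced here; the sketch below only illustrates how an alpha matte produced by such matting tools is consumed downstream, compositing a foreground element over a new background with standard alpha blending.

```python
# Hypothetical illustration of how an alpha matte (as produced by matting tools
# like those in AutoPost) is used downstream: composite a foreground element
# over a new background. This is standard compositing, not the AutoPost SDK.
import numpy as np

def composite(foreground, background, alpha):
    """foreground/background: HxWx3 float images in [0, 1]; alpha: HxW in [0, 1].
    Soft alpha values (e.g. at motion-blurred edges) blend the layers smoothly."""
    a = alpha[..., None]                      # broadcast the matte over channels
    return a * foreground + (1.0 - a) * background

height, width = 4, 4
fg = np.ones((height, width, 3)) * [1.0, 0.2, 0.2]       # red foreground element
bg = np.zeros((height, width, 3))                         # replacement background
matte = np.linspace(0, 1, height * width).reshape(height, width)  # soft matte
out = composite(fg, bg, matte)
```
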
HHI | OmniCam-360 | Scalable, mirror-based multi-camera system | TRL 9
Synopsis: The scalable, mirror-based multi-camera system OmniCam was developed at Fraunhofer HHI. It allows the recording of live video in 360° panoramic format. The newest OmniCam model, the OmniCam-360, consists of ten 36° mirror segments, each equipped with one HD camera. The cameras of the OmniCam-360 are arranged vertically and in a circle, reciprocal to the cylindrically arranged mirror segments, and are placed around a virtual center. This arrangement allows parallax-free image stitching of scenes in the range between 1 meter and infinity. The covered vertical field of view is about 60°. With a resolution of up to 10,000 x 2,000 pixels, OmniCam-360 videos are well suited to immersive applications. Since 2014, material generated by the OmniCam can be processed in real time: supported by the Real Time Stitching Engine developed at Fraunhofer HHI, the panoramic content can be displayed on tablets and VR glasses in real time. By enlarging the system, the OmniCam-360 can also generate 3D video content. The crucial difference between the 3D OmniCam and its 2D version is the number of cameras: while the 2D version is equipped with a single camera per mirror segment, the 3D version has two cameras per segment. The 3D rig of the OmniCam-360 features a total of ten 36° mirror segments with two cameras each, amounting to a total of 20 micro HD cameras for 360° 3D panoramic recordings. To achieve the 3D effect, the lenses of each camera pair are arranged at distances between 40 and 70 mm; lens-pair distances are adjustable within this range. For optimal 3D imaging, each lens pair should be spaced close to the average eye distance of 65 mm. As in 2D imaging, the vertical image section is also 60° in 3D panoramic imaging. The 3D OmniCam system allows parallax-free recording for distances larger than 2 meters.
Keywords: camera, 360, 3D, video, panoramic
Website: http://www.hhi.fraunhofer.de/en/departments/vit/technologies-and-solutions/capture/panoramic-uhd-video/omnicam-360.html
Contact: Prof. Dr.-Ing. Peter Eisert, peter.eisert@hhi.fraunhofer.de

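The OmniCam-360's mirror rig and HHI's Real Time Stitching Engine are purpose-built for parallax-free, real-time operation; the sketch below only illustrates the general stitching concept, using OpenCV's generic offline stitcher on overlapping still images.

```python
# Hypothetical illustration of the stitching concept only: OpenCV's generic
# offline stitcher applied to overlapping still images. It does not reproduce
# the OmniCam-360 mirror rig or HHI's Real Time Stitching Engine.
import cv2

def stitch_panorama(image_paths, output_path="panorama.jpg"):
    """Stitch overlapping camera images into one panorama (offline, generic)."""
    images = [cv2.imread(p) for p in image_paths]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status code {status}")
    cv2.imwrite(output_path, panorama)
    return panorama

# e.g. stitch_panorama(["segment0.jpg", "segment1.jpg", "segment2.jpg"])
```
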
CGAII | MS3DHBR | Method and System for 3D Human Body Reconstruction | TRL 7
Synopsis: Realistic 3D modeling of the human body has many applications, ranging from fashion and archaeology to modern medicine. This system provides an easy, fast and high-resolution modeling solution for the human body. "Easy" refers not only to ease of use but also to the small amount of hardware required. With this system and method, an inexperienced user can complete the whole process effortlessly: after scanning, the model is obtained at the push of a button. More importantly, the model is of high quality, with texture and structure clearly represented.
Keywords: 3D reconstruction, human body, automation
Website: https://www.cgaii.com/#/index
Contact: Zheng Kuan, zhengkuan@4dage.com

CGAII | M3DRPC | Method for 3D Reconstruction Based Upon Panorama Camera | TRL 9
Synopsis: This method presents a novel approach to 3D modeling which, with the help of a panorama camera, achieves better performance in simultaneous localization and mapping, feature-point matching, loop-closure detection, and the like. It is also faster and more efficient than other approaches. Since the method causes high GPU occupancy and would greatly increase power consumption, a mobile-terminal solution is introduced to tackle this problem.
Keywords: 3D reconstruction
Website: https://www.cgaii.com/#/index
Contact: Zheng Kuan, zhengkuan@4dage.com

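The CGAII method is not public; the sketch below is a generic, hypothetical illustration of one component mentioned above, loop-closure detection, by matching ORB descriptors of the current frame against earlier keyframes. Thresholds and data structures are assumptions.

```python
# Hypothetical, generic illustration of loop-closure detection: match ORB
# descriptors of the current frame against earlier keyframes and report a loop
# closure when enough matches are found. Not the CGAII panorama-camera method.
import cv2

orb = cv2.ORB_create(nfeatures=800)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def detect_loop_closure(current_gray, keyframes, min_matches=60):
    """keyframes: list of (frame_id, descriptors); return matched id or None."""
    _, des_current = orb.detectAndCompute(current_gray, None)
    if des_current is None:
        return None
    for frame_id, des_keyframe in keyframes:
        if des_keyframe is None:
            continue
        if len(matcher.match(des_keyframe, des_current)) >= min_matches:
            return frame_id          # the camera revisited a mapped place
    return None
```
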
CCG | LSE | Location and Sensing Engine | TRL 7
Synopsis: The Location and Sensing Engine is a platform that provides sensor integration into information systems via an API. It provides indoor location of mobile devices (such as smartphones or tags) and allows the collection of other sensor data (vibration, noise, temperature, etc.) to be used for environmental monitoring, predictive maintenance of machines, or any other machine learning application. As it may not require any infrastructure installation or any special device, this technology offers a unique opportunity to monitor people and resources with high precision, easy deployment and low implementation costs.
Keywords: IoT, indoor location
Website: https://www.youtube.com/watch?v=F_jizkcoGAA
Contact: João Moutinho, joao.moutinho@ccg.pt

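The actual LSE API is not public; the endpoint, authentication scheme, and response fields in the sketch below are invented purely to illustrate how a location-and-sensing platform like this might be consumed from an information system.

```python
# Hypothetical sketch of consuming a location/sensing API of the kind described
# above. The endpoint, authentication, and response fields are invented for
# illustration; the real LSE API is not public.
import requests

BASE_URL = "https://lse.example.org/api"     # assumed, not a real endpoint

def get_device_position(device_id: str, token: str) -> dict:
    """Fetch the latest indoor position estimate for a tag or smartphone."""
    resp = requests.get(f"{BASE_URL}/devices/{device_id}/position",
                        headers={"Authorization": f"Bearer {token}"},
                        timeout=5)
    resp.raise_for_status()
    return resp.json()                       # assumed fields, e.g. x, y, floor

def get_sensor_readings(device_id: str, kind: str, token: str) -> list:
    """Fetch recent environmental readings (vibration, noise, temperature, ...)."""
    resp = requests.get(f"{BASE_URL}/devices/{device_id}/sensors/{kind}",
                        headers={"Authorization": f"Bearer {token}"},
                        timeout=5)
    resp.raise_for_status()
    return resp.json()
```
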