# CineScale: Recognising Cinematic Features with AI

Movie feature projects and databases

- Shot scale: Very Long Shot | Long Shot
- Camera level: Shoulder | Hip
- Camera angle: Neutral | Low
- ~1M frames from >100 movies, in JPEG format, extracted at 1 fps
- Shot scale, camera angle, and camera level annotations from 2+1 human annotators (see the sketch after this list)
- AI-driven feature extraction with a CNN model for recognition
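
One plausible reading of "2+1 human annotators" is two primary annotators plus a third who resolves disagreements. The snippet below is a minimal, hypothetical sketch of how per-frame labels could be consolidated under that assumption; the actual annotation protocol, label format, and frame identifiers used in CineScale are not detailed here.

```python
def consolidate(labels_a, labels_b, labels_c):
    """Merge per-frame labels from two primary annotators (a, b) and a
    third tie-breaker (c): if a and b agree, keep their label,
    otherwise fall back to the third annotator's decision."""
    merged = {}
    for frame_id in labels_a:
        la, lb = labels_a[frame_id], labels_b[frame_id]
        merged[frame_id] = la if la == lb else labels_c[frame_id]
    return merged

# Hypothetical shot-scale labels for three frames
a = {"f001": "Long Shot", "f002": "Very Long Shot", "f003": "Long Shot"}
b = {"f001": "Long Shot", "f002": "Long Shot",      "f003": "Long Shot"}
c = {"f001": "Long Shot", "f002": "Very Long Shot", "f003": "Long Shot"}
print(consolidate(a, b, c))
# {'f001': 'Long Shot', 'f002': 'Very Long Shot', 'f003': 'Long Shot'}
```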

## What is the aim?

Our project aims to leverage artificial intelligence to recognise and analyse cinematic features in movies. Through extensive research and development, we have curated an extensive database of over one million frames from more than 100 movies, all extracted in JPEG format at a rate of one frame per second.
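
As a practical illustration of the extraction setup described above (JPEG frames sampled at one frame per second), the sketch below shows one way to perform such sampling with ffmpeg from Python. The file names and paths are hypothetical; this is not necessarily the pipeline used to build the CineScale database.

```python
import subprocess
from pathlib import Path

def extract_frames(movie_path: str, out_dir: str, fps: int = 1) -> None:
    """Extract JPEG frames from a movie at the given rate using ffmpeg.
    Output files are numbered sequentially (frame_000001.jpg, ...)."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "ffmpeg", "-i", movie_path,
            "-vf", f"fps={fps}",   # sample one frame per second by default
            "-q:v", "2",           # high JPEG quality
            str(Path(out_dir) / "frame_%06d.jpg"),
        ],
        check=True,
    )

# Hypothetical usage:
# extract_frames("some_movie.mp4", "frames/some_movie")
```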

The heart of our project lies in the extraction of cinematic features using cutting-edge artificial intelligence techniques. We have developed a Convolutional Neural Network (CNN) model specifically designed to recognise these features. Our AI-driven feature extraction models serve as a powerful tool for filmmakers, researchers, and cinema enthusiasts alike: they enable a deeper understanding of the visual language employed in movies, allowing for precise analysis and comparison of different films. With CineScale, you can explore the technical aspects of movies, uncover hidden patterns, and gain valuable insights into the art of filmmaking.
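
To illustrate how such a CNN recogniser can be set up, the sketch below fine-tunes an ImageNet-pretrained ResNet backbone for shot-scale classification with PyTorch. The backbone choice, class list, and input size are assumptions made for the example; the actual CineScale architecture and training procedure are not specified here.

```python
import torch
import torch.nn as nn
from torchvision import models

# Example label set for one recognition task (only the shot-scale
# categories mentioned above are listed; the real set may be larger).
SHOT_SCALE_CLASSES = ["Very Long Shot", "Long Shot"]

class ShotScaleNet(nn.Module):
    """A CNN classifier for one cinematic feature (here: shot scale),
    built by replacing the final layer of an ImageNet-pretrained ResNet."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)  # logits over shot-scale classes

model = ShotScaleNet(num_classes=len(SHOT_SCALE_CLASSES))
dummy = torch.randn(1, 3, 224, 224)  # one RGB frame, ImageNet-sized
print(model(dummy).shape)            # torch.Size([1, 2])
```

The same pattern applies to camera angle and camera level, either as separate classifiers or as additional output heads on a shared backbone.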

## People

The research activities are conducted by a joint project team belonging to the Department of Information Engineering (DII) of the University of Brescia (Italy) and the Department of Film Studies at ELTE University, Budapest (Hungary).
- Sergio Benini (DII, UniBS)
- Mattia Savardi (DII, UniBS)
- András Bálint Kovács (ELTE, Budapest)