Multimedia is a rich medium that accommodates numerous instructional strategies and addresses many of the challenges of instruction. Multimedia is any combination of text, graphic art, sound, animation and video. Today, there are plenty of new media technologies being used to create the complete multimedia experience; for example, multimedia kiosks are often used in public places as information providers.

MIDI (Musical Instrument Digital Interface) is a data format for describing music in terms of notes, sequences of notes, and which instrument will play those notes. Auxiliary controllers are available to give more control over the notes played on the keyboard.

Difference between animation and video: Animation is commonly used in those instances where videography is not possible.
Synthesizer: A synthesizer is an electronic device incorporating both hardware and software. Its components include:
1. Sound generators, for which stored patches define the sounds produced.
2. Keyboard: provides the musician direct control of the synthesizer.
3. Control panel: controls those functions that are not directly concerned with notes and duration. The panel controls include a slider that sets the overall volume of the synthesizer and a button that turns the synthesizer on.
4. Memory: synthesizer memory is used to store patches for the sound generators and settings of the control panel.
5. Sequencer: transforms notes into MIDI messages; a sequencer may be a computer.

MIDI Messages: MIDI messages describe music by defining pitch and the duration of notes. There are two classes of message:
1. Channel messages: go only to specified devices, because a channel number is included in the message.
   i. Channel voice messages: concern notes and their duration.
   ii. Channel mode messages: include Omni On and Omni Off, which determine whether a device responds on all channels or only on its own channel.
2. System messages: go to all devices in a MIDI system, because no channel number is specified.
   i. System real-time messages.
   ii. System common messages.
   iii. System exclusive messages.
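The channel-number-in-the-message idea can be made concrete. A minimal sketch (not part of the original notes), assuming the standard MIDI status-byte layout — upper nibble 0x9 for Note On, 0x8 for Note Off, lower nibble carrying the channel:

```python
def note_on(channel, key, velocity):
    """Build a MIDI Note On channel voice message (3 bytes).

    The status byte carries the message type in its upper nibble and
    the channel number (0-15) in its lower nibble, which is why channel
    messages reach only the specified devices.
    """
    assert 0 <= channel <= 15 and 0 <= key <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, key, velocity])

def note_off(channel, key, velocity=0):
    """Build the matching Note Off message (status nibble 0x8)."""
    assert 0 <= channel <= 15 and 0 <= key <= 127 and 0 <= velocity <= 127
    return bytes([0x80 | channel, key, velocity])

# Middle C (key 60) on channel 0 at velocity 100:
msg = note_on(0, 60, 100)
assert msg == bytes([0x90, 60, 100])
```

A sequencer emitting such three-byte messages is all that "transforms the note into a MIDI message" amounts to at the wire level.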
System real-time messages synchronize the timing of all devices in the system; to avoid delay, these messages can be sent in the middle of other messages if necessary. Using the MIDI clock, a receiver can synchronize with the clock cycle of the sender. System common messages are commands that prepare sequencers and synthesizers to play a song (e.g. Song Select).

The SMPTE timing standard was originally developed by NASA as a way to time-stamp incoming data from different tracking stations, so that the receiving computer could tell at what time each piece of data was created. The SMPTE format consists of hours:minutes:seconds:frames, and uses a 24-hour clock, with hours running from 0 to 23 before recycling.

MIDI Software: MIDI software applications generally fall into four major categories, including:
1. Music recording and performance applications
2. Musical notation and printing applications
3. Synthesizer patch editors and patch libraries

Speech Signals: The human brain can recognize the very fine line between speech and noise. Generated speech must be understandable and must sound natural. Speech signals have two properties which can be used in speech processing:
1. Voiced speech signals show almost periodic behaviour during certain time intervals, so we can consider these signals quasi-stationary for around 30 ms.
2. The spectrum of speech signals shows characteristic maxima.
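The quasi-stationary property is exploited by analyzing speech in short windows. A minimal sketch (an addition, not from the notes), assuming the signal is a plain list of samples at a hypothetical 8 kHz rate:

```python
def frame_signal(samples, sample_rate, frame_ms=30):
    """Split a speech signal into ~30 ms windows, inside which the
    signal can be treated as quasi-stationary for analysis."""
    n = int(sample_rate * frame_ms / 1000)   # samples per frame
    return [samples[i:i + n] for i in range(0, len(samples), n)]

# One second of a (silent, placeholder) 8 kHz signal -> 240-sample frames
frames = frame_signal([0.0] * 8000, 8000)
assert len(frames[0]) == 240
assert len(frames) == 34   # 33 full frames plus one shorter remainder
```

Real analysis systems would then compute a spectrum per frame to locate the characteristic maxima mentioned above.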
Speech Generation: A speech output system transfers text into speech automatically, without any lengthy preprocessing. An important requirement of speech generation is the generation of the signal in real time. The easiest method for speech generation is to use pre-coded speech and play it back in a timely fashion.

Time-dependent sound concatenation: Speech generation can also be performed by concatenating sound units in a timely fashion, where individual speech units are composed like building blocks. In the simplest case the units are whole words, and the best pronunciation of a word is achieved by storing the whole word. Speech can also be generated from a set of syllables: the figure shows the syllable sounds of the word CRUM. At the next level down, the individual phones of the word CRUM are used; with just a few phones it is possible to create an unlimited vocabulary. However, phone sound concatenation shows problems during the transition between individual phones.

Co-articulation problem: The transition between individual sound units creates an essential problem called co-articulation, which is the mutual sound influence across several sounds. To make the transition problem easier, di-phone sound concatenation is used: two phones constitute a di-phone, and the di-phones of the word CRUM are shown in the figure. Even at this level, however, the transition problem is not solved sufficiently. To solve these problems, the speech is synthesized in two steps: first a sound script is produced, and in the second step the sound script is translated into a speech signal.
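The building-block idea of time-dependent sound concatenation can be sketched as follows. The di-phone inventory here is hypothetical (real systems store recorded waveforms per unit):

```python
# Hypothetical stored waveforms (sample values) for three di-phones.
DIPHONES = {
    "k-r": [0.1, 0.3],
    "r-u": [0.2, 0.4],
    "u-m": [0.0, -0.2],
}

def concatenate(units, inventory):
    """Concatenate stored sound units into one signal, in time order.
    The joins between units are exactly where co-articulation problems
    arise in practice."""
    signal = []
    for u in units:
        signal.extend(inventory[u])
    return signal

speech = concatenate(["k-r", "r-u", "u-m"], DIPHONES)
assert len(speech) == 6
```

A syllable- or word-level system works the same way, only with larger (and fewer) stored units.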
Using a speech synthesizer, text can be transformed into acoustic signals. Most transcription methods work with letter-to-phone rules, and a dictionary of exceptions is stored in a library. Besides the problem of co-articulation, ambiguous pronunciation must be considered. The typical components of such a system are given in the figure below.

Frequency-dependent sound concatenation: Speech generation can also be based on frequency-dependent sound concatenation. Formant synthesis simulates the vocal tract through filters; formants are the frequency maxima in the spectrum of the speech, and the characteristic values are the filters' centre frequencies and their bandwidths. A method used for sound synthesis in order to simulate human speech is the linear predictive coding (LPC) method.

Speech Analysis: Speech analysis can serve to analyze who is speaking.
Speech analysis can also serve to recognize and understand the speech itself. For speaker identification and verification, the computer identifies and verifies the speaker using an acoustic fingerprint: a digitally stored speech probe of a certain statement of the speaker.

Speech recognition faces several specific problems: word boundaries must be determined; the same word can be spoken quickly or slowly; very often neighbouring words flow into one another; and room acoustics with environmental noise present a further specific problem.

Speech Recognition and Understanding: A system that provides recognition and understanding of speech signals applies this principle several times, as follows:
1. In the first step, an acoustic and phonetic analysis is performed.
2. In the second step, the speech elements are compared with existing patterns; here, decision errors of the previous step can be recognized and corrected.
3. The third step deals with the semantics of the previously recognized language; in this step too, errors of the previous steps can be recognized and corrected.

There are still many open problems in speech recognition and understanding research.

Coding of speech signals:
1. Signal form coding: this coding considers no speech-specific properties or parameters; the goal is to achieve the most efficient coding of the audio signal.
2. Parameterized systems: these work with source coding algorithms, where specific speech characteristics are used for data rate reduction; the channel vocoder is an example of a parameterized system.
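Comparing speech elements with stored patterns (the second step above) must cope with the same word being spoken quickly or slowly. Dynamic time warping is one classical way to do this; the technique name and this sketch are additions, not part of the notes:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences.
    By allowing a sample of one sequence to align with several samples
    of the other, a slowly spoken word still matches its stored pattern."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three possible alignments
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# A slowed-down version of the same contour still matches exactly:
assert dtw_distance([1, 2, 3], [1, 1, 2, 2, 3, 3]) == 0.0
assert dtw_distance([1, 2, 3], [3, 2, 1]) > 0.0
```

Here the sequences are one-dimensional for brevity; real recognizers compare vectors of spectral features per frame.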
Image: An image can be real or virtual. An image can be thought of as a function with resulting values of the light intensity at each point over a planar region. For digital computer operations the image must be digitized: a digital image is represented by a matrix of numeric values, each representing a quantized intensity value. If I is a two-dimensional matrix, then I[x][y] is the intensity value at the position corresponding to row x and column y of the matrix. The intensity at each pixel is represented by an integer and is determined from the continuous image around the pixel location.

Image Formats: There are different kinds of image format.

Captured image format: The captured image format is specified by two main parameters: spatial resolution (pixels) and color encoding (bits per pixel). Both parameter values depend on the hardware and software for input and output of images. If there are only two intensity values (black and white), the color can be encoded with 1 bit: the binary image format. If 8-bit integers are used to store each pixel value, 256 intensity levels can be represented.

Stored image format: When we store an image, we are storing a two-dimensional array of values in which each value represents the data associated with a pixel in the image. For a bitmap this value is a binary digit; for a color image it may be eight bits per color. A bitmap is an array of pixel values that map one by one to pixels on the screen. Some of the image formats are JPEG (Joint Photographic Experts Group), PostScript, etc.

Graphics primitives (such as lines) and their attributes represent a higher level of image representation. The advantage of the higher-level primitives is the reduction of data to be stored per graphical image; the disadvantage is the additional conversion step from the graphical primitives and their attributes to a pixel representation.

Digital Image Processing: The field of digital image processing refers to the processing of digital images by means of a digital computer.
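The matrix view of a digital image, and the storage implied by resolution and bits per pixel, can be illustrated directly (a small made-up image, not from the notes):

```python
# A digital image as a matrix I of quantized intensity values,
# indexed I[x][y] by row x and column y as in the text.
width, height, bits_per_pixel = 4, 3, 8
I = [[0 for _ in range(width)] for _ in range(height)]
I[1][2] = 255                      # brightest 8-bit intensity level

# Storage needed for the raw (uncompressed) image:
storage_bits = width * height * bits_per_pixel
assert storage_bits == 96          # 12 pixels * 8 bits

# A binary (1-bit) image of the same size needs only 12 bits:
assert width * height * 1 == 12
```

Scaling the same arithmetic up to screen resolutions is what makes the frame-buffer storage figures discussed later so large.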
Image Analysis: Image analysis is concerned with techniques for extracting descriptions from images.

Fundamental steps of digital image processing:

Image acquisition: Image acquisition is the first process in digital image processing. The acquisition stage generally involves pre-processing such as scaling.

Image enhancement: Image enhancement is the simplest area of digital image processing. The idea behind enhancement techniques is to bring out detail that is hidden or not clearly seen, or to highlight certain features of interest in the image, for example increasing the contrast of an image. It is a subjective area of image processing; no mathematical or probabilistic models are used. Input and output of this step are images.

Image restoration: Image restoration is an area that also deals with improving the appearance of an image; however, unlike enhancement, which is subjective, restoration is objective. Output of this process is an image.

Image compression: Image compression deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Image compression is familiar to most computer users in the form of file extensions such as .jpg.

Morphological processing: Morphological processing deals with tools for extracting image components useful in the representation and description of shape.

Segmentation: Segmentation is the process of partitioning an image into its constituent components or objects. It is the most difficult task in digital image processing; the more accurate the segmentation, the more likely recognition is to succeed, and even a rough segmentation brings the process a long way towards a successful solution.

Representation and description: This step almost always follows the output of the segmentation stage. The decision that must be made is whether the data should be represented as a boundary or as a complete region: when the focus is on external shape, a boundary is suitable; when the focus is on internal properties, a complete region is suitable. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing; description, also called feature selection, extracts attributes. The output of this process is attributes.

Recognition (labeling): Recognition is the process that assigns a label to an object based on its descriptors, i.e., classifying the object according to the attributes extracted.

Video: Video signal representation includes three aspects: the visual representation, transmission, and digitalization.

Visual representation: Important measures include vertical detail and viewing distance, luminance and chrominance, continuity of motion, and the temporal aspects of illumination.

Luminance and chrominance: Color vision is achieved through three signals, proportional to the red, green and blue (RGB) light in each portion of the screen.

Continuity of motion: Movies use 24 frames per second, and show a jerkiness especially when large objects are moving fast and close to the viewer.

Temporal aspects of illumination: In contrast to the continuous pressure waves of an acoustic signal, a visual signal is presented as a discrete sequence of individual pictures.

Approaches to color encoding:
1. RGB signal: consists of separate signals for red, green and blue light; other colors can be coded as combinations of these primary colors.
2. YUV signal: since human perception is more sensitive to brightness (luminance) than to any chrominance information, the component division of the YUV signal separates the luminance Y from the two chrominance components U and V.
3. YIQ signal: uses a similar component division.

Digitalization: The quantized samples are converted to bit streams.

Computer video formats: The output of digitized motion video depends on the display device, and the computer video format depends on the input and output devices for the motion video medium. The commonly used device is the raster display. The video controller displays the image stored in the frame buffer; the important task here is the constant refresh of the display. The monitor is controlled through an RGB output. The storage capacity per image follows from the resolution and the number of bits per pixel.

Color TV Systems: The conventional television systems include:
1. NTSC: the oldest and most widely used television standard. The color information is modulated onto a color carrier.
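The luminance/chrominance split used by these systems can be sketched with the classic analogue-TV luminance weights; the exact coefficients vary between standards, so treat these as illustrative:

```python
def rgb_to_yuv(r, g, b):
    """Split an RGB triple into luminance Y and the two colour
    difference (chrominance) signals U and V.  The 0.30/0.59/0.11
    weights are the classic analogue-TV values."""
    y = 0.30 * r + 0.59 * g + 0.11 * b
    u = 0.493 * (b - y)   # scaled blue colour difference
    v = 0.877 * (r - y)   # scaled red colour difference
    return y, u, v

# Pure white carries all its information in luminance: U = V = 0.
y, u, v = rgb_to_yuv(1.0, 1.0, 1.0)
assert abs(y - 1.0) < 1e-9 and abs(u) < 1e-9 and abs(v) < 1e-9
```

Because the eye is less sensitive to U and V, those components can later be sampled and coded more coarsely than Y.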
2. PAL (Phase Alternating Line): developed by Telefunken; it uses quadrature amplitude modulation and is used in Western Europe.

High Definition TV: HDTV improves on the resolution of the conventional systems.

Animation: An animation covers all changes that have a visual effect. These might include time-varying positions (motion dynamics).

Basic concepts of producing an animation: Values of variables can be used as parameters for animation routines. The SCEne FOrmat (Scefo) specification can be considered a superset of linear scripts, including groups and object hierarchies as well as transformation abstractions using high-level language constructs. Each event is described by a beginning frame number.

Methods of controlling animation: Controlling an animation is independent of the language used for describing it, and different techniques can be used:
1. Procedural control: this can be done at the object level by specifying simple transformations (translations, rotations).
2. Constraint-based systems: examples are Sketchpad and ThingLab. A constraint might state, for instance, that a cube has a certain mass and that the force of gravity acts on the cube.
3. Kinematics and dynamics.
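Procedural control with variables as parameters can be sketched as linear in-betweening of a position (a minimal illustration, not taken from the notes):

```python
def inbetween(p0, p1, frames):
    """Linearly interpolate an object's position between two key
    positions p0 and p1 over the given number of frames -- a minimal
    form of motion dynamics (time-varying position)."""
    (x0, y0), (x1, y1) = p0, p1
    return [(x0 + (x1 - x0) * t / (frames - 1),
             y0 + (y1 - y0) * t / (frames - 1))
            for t in range(frames)]

path = inbetween((0.0, 0.0), (10.0, 5.0), 5)
assert path[0] == (0.0, 0.0)
assert path[-1] == (10.0, 5.0)
```

An animation routine would render one picture per entry of `path`, with the frame number indexing into the event's timeline.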
Data compression: The transfer of uncompressed video data over digital networks requires very high bandwidth, even for a single point-to-point communication; uncompressed graphics, audio and video likewise require large storage capacity, and the same holds for multimedia communication. Compression is needed to provide feasible and cost-effective solutions.

Entropy coding: Entropy coding is lossless: it compresses a sequence of digital data without loss, and the decompression process regenerates the data completely. It is used regardless of the medium's specific characteristics.

Source coding: Source coding takes into account the semantics of the data. The degree of compression that can be reached by source coding depends on the data content, and the decoded data stream is similar, but not identical, to the original.

Hybrid coding: Hybrid coding combines entropy and source coding; examples are JPEG and PLV.
In the case of source coding, a one-way relation between the original data stream and the encoded data stream exists; in entropy coding, by contrast, the data stream to be compressed is considered a simple digital sequence and the semantics of the data is ignored.

Major steps of data compression:
1. Picture preparation.
2. Picture processing: processing is the first actual step of the compression process, making use of sophisticated algorithms such as a transformation from the time domain to the frequency domain; the discrete cosine transform (DCT) is the best-known such transformation.
3. Quantization: the quantization process takes the results of the previous step and specifies the mapping of real numbers into integers. Quantization can be performed using a different number of bits per coefficient; vector quantization is one technique.
4. Entropy coding: entropy coding is usually the last step of data compression. In Huffman coding, for example, every character has an associated weight equal to the number of times the character occurs in the data stream.

The processing and quantization steps can be repeated iteratively several times in a feedback loop. Well-known hybrid schemes such as JPEG and MPEG coding are built from these steps.
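The quantization step (mapping real-valued coefficients to integers) can be sketched as uniform scalar quantization with a hypothetical step size:

```python
def quantize(coefficients, step):
    """Map real-valued (e.g. DCT) coefficients to integers by dividing
    by a step size and rounding -- the mapping of real numbers into
    integers described above."""
    return [round(c / step) for c in coefficients]

def dequantize(values, step):
    """Approximate reconstruction; the rounding error is not
    recoverable, which is what makes quantization lossy."""
    return [v * step for v in values]

q = quantize([12.6, -3.1, 0.4], 2.0)
assert q == [6, -2, 0]
assert dequantize(q, 2.0) == [12.0, -4.0, 0.0]
```

Using a larger step (fewer bits) for less important coefficients is exactly the "different number of bits per coefficient" idea.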
Some basic compression techniques:

Run-length coding: If a byte occurs at least four consecutive times, the run is compressed: the compressed data contain the byte, followed by a special flag and the number of its occurrences, so that a run of consecutive identical bytes is converted into three bytes. The flag is a special byte that does not occur as part of the data stream itself.

Zero/blank suppression: Starting with a sequence of three blanks, runs of blanks in a text can be reduced to two bytes; single blanks or pairs of blanks are left as they are.

Diatomic encoding: A variation of run-length coding based on combinations of two bytes; this technique determines the most frequently occurring pairs of bytes.

(The term spatial domain refers to the image plane itself; approaches in this category, like the byte-oriented techniques above applied to pixel data, are based on direct manipulation of the pixels in an image.)
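The run-length scheme just described (byte, flag, count, for runs of at least four) can be sketched directly; the flag value 0xFF is an assumption and must not occur in the data:

```python
FLAG = 0xFF   # assumption: a marker byte that never appears in the data

def rle_encode(data):
    """Run-length encode: a byte occurring at least four consecutive
    times is replaced by three bytes -- the byte, the flag, and the
    count.  Counts above 255 would need splitting; omitted here."""
    out, i = [], 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        if run >= 4:
            out += [data[i], FLAG, run]    # compress the run
        else:
            out += [data[i]] * run         # short runs pass through
        i += run
    return out

# A run of eight identical bytes shrinks to three:
assert rle_encode([7, 7, 7, 7, 7, 7, 7, 7, 1]) == [7, FLAG, 8, 1]
assert rle_encode([1, 2, 2, 3]) == [1, 2, 2, 3]
```

The decoder simply expands every (byte, FLAG, count) triple back into a run, regenerating the data exactly — which is why this is lossless.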
(Frequency-domain processing techniques, by contrast, are based on modifying a transform of the image, such as the Fourier transform; this is the idea behind transformation encoding.)

Huffman coding: Huffman coding is one type of entropy coding, in which the given characters must be encoded together with the probability of their occurrence. The characters to be encoded form the leaf nodes of a tree; every node contains an occurrence probability, and 0 and 1 are assigned to the branches of the tree. The Huffman coding algorithm determines the optimal code using the minimum number of bits, and the lengths (numbers of bits) of the coded characters differ.

Arithmetic coding: Arithmetic coding is another type of entropy coding. A one-to-one correspondence between code symbols and code words does not exist, because arithmetic coding does not encode each symbol separately: each symbol is instead coded by taking the preceding data into account. Therefore the coded data stream must always be read from the beginning, and random access is not possible. In practice, the average compression rates achieved by arithmetic and Huffman coding are similar.

What is Multimedia? Multimedia means that computer information can be represented through audio, video and animation in addition to traditional media (text and graphics). Multimedia is the field concerned with the computer-controlled integration of text, graphics, drawings, still and moving images (video), animation, audio, and any other media where every type of information can be represented, stored, transmitted and processed digitally.
A multimedia application is an application which uses a collection of multiple media sources, e.g. text, graphics, images, sound/audio, animation and/or video. Hypermedia can be considered one of the multimedia applications. Hypertext is text which contains links to other texts; hypermedia is not constrained to be text-based and can include other media such as graphics, audio and video.
Applications: Examples of multimedia applications, and current big application areas in multimedia, include:
1. World Wide Web and hypermedia systems, which embrace nearly all multimedia technologies and application areas and enjoy ever-increasing popularity.
2. Multicast Backbone: the equivalent of conventional TV and radio on the Internet.

Media can be divided into two types: discrete media (DM, static), such as text and graphics, and continuous media (CM, dynamic), such as audio and video.

Design considerations: To ensure that users view content in a more structured way, the designers of the Colgate web site used white space between media elements; the white space in this example includes the blue area around the products. This point might be the optical center. Fewer colors create a cleaner, more tasteful look. A multimedia product should give users direct control over the product; therefore, it is important to load the media elements quickly.