Thursday, November 24, 2022

Timeline of the most dangerous viruses.


What is a computer virus?

A computer virus is a kind of malicious computer program that, when executed, replicates itself by inserting its own code into other files and programs on your system; once the replication succeeds, those files are said to be infected. Computer viruses come in many types, and each can infect a device in a different way.

Download the document 👈

Friday, November 18, 2022

Artificial intelligence

 History of Artificial Intelligence

Personal opinion: artificial intelligence has allowed our species to advance in many different fields, from medicine to everyday life.

What is Artificial Intelligence?

Artificial Intelligence (AI) is a branch of computer science that develops machines able to think and act like humans, handling tasks such as voice recognition, problem-solving, learning, and planning. It is the intelligence displayed by machines, as opposed to the natural intelligence displayed by humans and animals.
The aim is to build computer-controlled robots or software that can think the way the human mind does. AI systems are constantly being trained toward this goal: they learn from experience, adjust to new inputs, and perform human-like tasks.
So overall, with the use of artificial intelligence, machines are being created that can act intelligently on the data they receive by interacting with their environment. If the concept grows stronger in the future, an AI could become like a friend: when you face a problem, it will think it through and tell you what to do.
Artificial Intelligence (AI) is the ability of a machine or computer program to think and learn. The concept is based on the idea that machines should be enabled to think about a problem the way humans do, act on it, and learn from it.



Who is the father of Artificial Intelligence?

The "invention" of artificial intelligence is really just a phrase: the concept has been present since the last century, but putting it into practice in a form that convincingly mimics human behavior took decades of work. John McCarthy is credited as the founder of artificial intelligence.
His research aimed at using artificial intelligence so that machines could think and act like humans in different and changing environments. His contributions to the field are worth mentioning: he brought a whole new perspective to artificial intelligence, focusing mainly on learning algorithms and their computational capacity, which helped drive research into powerful algorithms for learning from the environment.

The year the term "artificial intelligence" was coined.

Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference, where AI was established for the first time as an academic field. Around that time, high-level programming languages such as FORTRAN, LISP, and COBOL were being invented.


Who created the first Artificial Intelligence?

A lot of progress has been made in artificial intelligence over time. New systems emerged in the early 1900s, but it was not until 1929 that a major one came out of Japan and created waves around the world. A Japanese biologist and professor named Makoto Nishimura created the robot Gakutensoku, said to be the first robot whose artificially driven mind could grasp knowledge from people and its surroundings and respond accordingly.

What are the Components of AI?

1.-Learning: one of the essential components of AI, learning for AI includes the trial-and-error method: the program keeps attempting solutions until it comes across the right result, keeps a note of every move that gave a positive outcome, and stores it in its database to use the next time it is given the same problem. The learning component of AI also includes memorizing individual items, such as different solutions to problems, vocabulary, and foreign languages; this is known as rote learning. This learning method is later extended using generalization. A tiny sketch of the idea follows.
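A minimal Python sketch of rote learning, using a made-up "guess the hidden number" problem; the function and variable names here are hypothetical, chosen only to show trial and error plus memorization of positive results:

```python
solutions = {}  # the program's "database" of moves that gave positive results

def solve_by_trial_and_error(problem, candidates, is_correct):
    """Try candidate answers until one works, then memorize the winner."""
    if problem in solutions:            # rote learning: reuse a stored answer
        return solutions[problem]
    for guess in candidates:            # trial and error
        if is_correct(problem, guess):
            solutions[problem] = guess  # store the positive result
            return guess
    return None

# The "problem" here is a hidden target number; a correct guess equals it.
print(solve_by_trial_and_error(7, range(100), lambda p, g: g == p))  # 7
print(solutions)  # {7: 7} -- the same problem is answered instantly next time
```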

2.-Reasoning: the art of reasoning was something limited to humans until about five decades ago, and the ability to draw inferences makes reasoning one of the essential components of artificial intelligence. To reason is to allow the platform to draw inferences that fit the situation at hand. These inferences are categorized as either deductive or inductive. The difference is that in the deductive case the premises guarantee the conclusion ("Fred is in the museum or the café; he is not in the café; therefore he is in the museum"), whereas in the inductive case the premises only support the conclusion without guaranteeing it ("previous accidents of this kind were caused by instrument failure; therefore this accident was probably caused by instrument failure"). The toy example below makes the contrast concrete.
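A toy Python contrast between the two inference styles; the facts about Fred and the accident statistics are invented purely for illustration:

```python
# Deduction: if the premises hold, the conclusion is guaranteed.
places = {"museum", "cafe"}   # premise: Fred is in the museum or the cafe
places -= {"cafe"}            # premise: Fred is not in the cafe
print(places)                 # {'museum'} -- certain

# Induction: past cases make a conclusion probable, never certain.
past_causes = ["instrument failure", "instrument failure",
               "instrument failure", "pilot error"]
likely = max(set(past_causes), key=past_causes.count)
print(likely)                 # 'instrument failure' -- likely, not guaranteed
```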

3.-Problem solving: in its general form, the AI's problem-solving ability means searching systematically through possible actions or candidate values until one is found that satisfies the goal (given the data, find x). AI addresses a considerable variety of problems, and the different methods of problem solving count among the essential components of artificial intelligence; they divide into special-purpose and general-purpose methods. A special-purpose method is tailor-made for a given problem and often exploits specific features of the situation in which the problem is embedded. A general-purpose method, on the other hand, is applicable to a wide variety of problems; a small sketch of one follows.
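A minimal sketch of a general-purpose method, here plain breadth-first search in Python; the toy goal ("find the x whose square is 144") is a made-up stand-in for any problem posed as data plus a goal test:

```python
from collections import deque

def general_search(start, is_goal, successors):
    """Breadth-first search: systematic, problem-independent exploration."""
    frontier, seen = deque([start]), {start}
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

# "Given the data, find x": search the integers for the x whose square is 144.
print(general_search(0, lambda x: x * x == 144, lambda x: [x + 1]))  # 12
```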

4.-Perception: in the 'perception' component of artificial intelligence, the system scans a given environment using various sense organs, either artificial or real. The resulting processes are maintained internally and allow the perceiver to analyze scenes, pick out the objects they contain, and understand their features and relationships. This analysis is often complicated, because one and the same object can present very different appearances on different occasions, depending on the angle from which it is viewed.

5.-Language understanding: in simple terms, a language can be defined as a set of signs whose meanings are fixed by convention. Language understanding, one of the most widely used components of artificial intelligence, deals with natural languages, whose meanings are established by usage rather than by formal definition. One of the essential characteristics of a full human language such as English is that it lets us refer to and differentiate an unlimited range of objects. AI is therefore developed so that it can understand the most commonly used human languages, such as English, allowing people to interact with the programs running on their computers in ordinary language rather than in code.


Features.

1.-Feature Engineering: feature extraction is the process of identifying an appropriate, minimal set of attributes or features from a given dataset. Performance depends heavily on choosing the correct set of features rather than the wrong ones; the sketch below shows one very simple selection heuristic.
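A tiny Python sketch of dropping near-constant features; the dataset and threshold are made up for illustration, and real feature engineering uses much richer criteria:

```python
import numpy as np

X = np.array([[1.0, 0.0, 5.2],   # hypothetical dataset: rows are samples,
              [2.0, 0.0, 4.8],   # columns are candidate features
              [3.0, 0.0, 5.1]])

variances = X.var(axis=0)        # a near-constant feature carries little signal
keep = variances > 1e-6
print(keep)                      # [ True False  True] -- drop the middle column
print(X[:, keep])                # the reduced feature set
```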

2.-Artificial Neural Networks: Artificial Neural Networks (ANNs), also known as neural networks (NNs), are based on collections of connected nodes known as artificial neurons, loosely modeled on the brain's cells. Each connection transmits a signal from one neuron to another after processing it; with the help of a nonlinear function, each neuron produces a real number as its output for the signals arriving at its connections. The connections are also called edges. Neurons are aggregated into layers that perform different transformations, and signals travel from the first layer to the last, possibly traversing the layers multiple times. There are two broad types of network. The first is the feedforward neural network, also known as acyclic, in which signals travel in one direction only; common examples are perceptrons, multi-layer perceptrons, and radial basis networks. The second is the recurrent neural network, which allows feedback connections and retains short memories of previous input events. A minimal feedforward example follows.
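A minimal feedforward sketch in Python with NumPy; the layer sizes are arbitrary and the random weights stand in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    return np.tanh(w @ x + b)   # each neuron applies a nonlinear function

x = rng.normal(size=3)                                 # the input signal
w1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # edges into 4 hidden neurons
w2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)   # edges into 2 output neurons

output = layer(layer(x, w1, b1), w2, b2)  # the signal flows one way, layer to layer
print(output)                             # two real-valued outputs
```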

3.-Deep Learning: the modern world is flooded with data, and with the help of deep learning the digital world is being transformed. Deep learning is a machine learning technique that teaches computers to process information in ways inspired by the human brain. Compared with shallow artificial neural networks, its architecture includes multiple hidden layers between the input and output layers, and the framework performs automatic feature extraction together with classification learning. It has significantly improved the performance of many applications, such as computer vision, image classification, and speech recognition. Despite the complex architecture and numerous hidden layers, model performance can be improved with high-performance parallel-computing GPUs. The sketch below shows what "deep" means structurally.
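Structurally, "deep" just means several hidden layers between input and output, as in this sketch; the layer sizes are arbitrary placeholders, and the untrained random weights are only there to show the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [8, 16, 16, 16, 4]     # input, three hidden layers, output

weights = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]
x = rng.normal(size=sizes[0])
for w in weights:              # the signal passes through every hidden layer
    x = np.maximum(0.0, w @ x) # ReLU, a common deep-network activation
print(x.shape)                 # (4,) -- the activations of the output layer
```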

4.-Natural language processing: natural language processing (NLP) is a subfield of linguistics, artificial intelligence, and computer science. It enables computers to take human language, whether text or spoken words (voice data), and understand it much as human beings do. Whether the language is spoken or written, NLP takes it as input, processes it, and translates it into a form the computer understands. Just as humans have ears to hear and eyes to see, computers use programs to read text and microphones to capture audio; and just as human beings process input with the brain, computers process it with programs and algorithms suited to each kind of input. In the end, the input is converted into code the computer can work with; the sketch below shows the very first steps of such a pipeline.
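Turning raw text into units a program can count might look like this minimal Python sketch; the sentence is invented for illustration, and real NLP systems go far beyond this:

```python
import re
from collections import Counter

text = "Computers read text; microphones capture audio. Computers process both."

tokens = re.findall(r"[a-z']+", text.lower())  # normalize case, split into words
print(tokens[:4])                      # ['computers', 'read', 'text', 'microphones']
print(Counter(tokens).most_common(1))  # [('computers', 2)]
```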

5.-Intelligent Robotics: robotics is the intersection of engineering, science, and technology that produces programmable machines, known as robots, which can assist people or mimic human actions. Robots were originally built to handle monotonous tasks, but their use has since expanded into domestic, commercial, and military settings. Each robot developed these days has a different level of autonomy for carrying out tasks without external influence, ranging from human-controlled bots to fully autonomous ones.

6.-Perception: machine perception takes inputs from sensors (such as cameras, wireless signals, and microphones), processes them, and deduces what they contain. It is mainly used in applications such as speech recognition, facial recognition, and object recognition; computer vision is the component that provides analysis of visual input.
For facial recognition, artificial intelligence has made it possible to recognize individual faces using biometric mapping. This path-breaking technology compares a captured face against a database of faces to find a match. The feature is typically used to authenticate employees or users through ID verification services for workplaces or mobile phones, and it works by pinpointing and measuring facial features from a saved image.

7.-Automate Simple and Repetitive Tasks: AI has an amazing ability to handle monotonous tasks repeatedly without getting tired. To understand this better, take the example of Siri, a voice-enabled virtual personal assistant created by Apple. As the name suggests, it acts as an assistant and can handle many commands in a single day: creating notes for a brief, rescheduling the calendar for a specific meeting, or guiding users on their way with navigation, Siri covers it all. These activities used to be done manually, taking a lot of time and effort; with a voice-enabled assistant you just need to speak, and it gets done in a fraction of the time, bringing increased efficiency. Other examples are Amazon Echo, Cortana, and Google Nest.

Thursday, November 17, 2022

MICROSOFT WINDOWS timeline.

 MICROSOFT WINDOWS 

TIMELINE

In 1975, Gates and Allen formed a partnership called Microsoft. Like most start-ups, Microsoft began small but had a huge vision: a computer on every desktop and in every home. Over the following years, Microsoft began to change the way we work. In June 1980, Gates and Allen hired Gates' former Harvard classmate Steve Ballmer to help run the company.

MS-DOS (1981)

MS-DOS (Microsoft Disk Operating System), released in 1981, was originally developed by Microsoft for IBM-compatible personal computers. Although it was the first OS from Microsoft, its command-line interface eventually lost ground to graphical alternatives such as Apple's Macintosh; even so, Microsoft continued to offer support for MS-DOS until the development of Windows XP.

Windows 1.0–2.0 (1985-1992)

Instead of typing MS-DOS commands, Windows 1.0 allowed users to point and click to access windows. It required a minimum of 256 kilobytes (KB) of memory, two double-sided floppy disk drives, and a graphics adapter card; a hard disk and 512 KB of memory were recommended for running multiple programs or when using DOS 3.0 or higher.

In 1987 Microsoft released Windows 2.0, which was designed for the Intel 286 processor. This version added desktop icons, keyboard shortcuts, and improved graphics support.



Windows 3.0–3.1 (1990–1994)

Microsoft released Windows 3.0 in May 1990, offering better icons, improved performance, and advanced graphics with 16 colors, designed for Intel 386 processors. Its popularity grew manifold following the release of the software development kit (SDK), which helped software developers focus more on writing applications and less on writing device drivers. With Windows 3.0, Microsoft completely rewrote the application development environment. The OS included Program Manager, File Manager, Print Manager, and games (remember Solitaire, a complete timewaster?).



Windows 95 (August 1995)

A major release of the Microsoft Windows operating system that caused Apple's market share to shrink was Windows 95. Windows 95, as the name suggests, was released in 1995 and represented a significant advance over its precursor, Windows 3.1. This was also the time when the first version of Microsoft's proprietary browser, Internet Explorer 1, was rolled out (August 1995) to catch the Internet wave.


Windows NT 3.1 – 4.0 (1993-1996)

A version of the Windows OS with 32-bit support for preemptive multitasking. There were two versions of Windows NT:
1. Windows NT Server, designed to act as a server in networks.
2. Windows NT Workstation, for stand-alone or client workstations.



Windows 98 (June 1998)

Described as an operating system that “Works Better, Plays Better,” Windows 98 offered support for a number of new technologies, including FAT32, AGP, MMX, USB, DVD, and ACPI. It was also the first OS to include a tool called Windows Update, which alerted customers when software updates became available for their computers.



Windows 2000 (February 2000)

W2K (its abbreviated form) was an operating system for business desktop and laptop systems, used to run software applications, connect to Internet and intranet sites, and access files, printers, and network resources. Microsoft released four versions of Windows 2000:
1. Professional (for business desktop and laptop systems)
2. Server (both a Web server and an office server)
3. Advanced Server (for line-of-business applications)
4. Datacenter Server (for high-traffic computer networks)



Windows ME – Millennium Edition (September 2000)

The Windows Millennium Edition, referred to as “Windows Me,” was an update to the Windows 98 core that included some features of the Windows 2000 operating system. This version removed the “boot in DOS” option but included other enhancements, such as Windows Media Player and Movie Maker for basic video editing.



Windows XP (October 2001)

This version of the OS was built on the Windows 2000 kernel and was introduced in 2001 with a redesigned look and feel. It was made available to the public in two versions:
1. Windows XP Home.
2. Windows XP Professional.

Microsoft focused on mobility for both editions: features such as plug-and-play support for connecting to wireless networks were introduced in this version of Windows, and it proved to be one of Microsoft's best-selling products. Its use started declining as Windows 7 deployments increased.



Windows 7 (October 2009)

Windows 7 made its official debut on October 22, 2009. The OS included enhancements in the form of fast start-up time, Aero Snap, Aero Shake, support for virtual hard disks, a new and improved Windows Media Center, and better security features.


Windows 8 (2012)

Bill Gates' vision of future computing was touch and voice replacing mouse and keyboard. The touch part arrived with Windows 8, a completely redesigned OS built from the ground up.


The OS replaces the more traditional Microsoft Windows OS look and feel with a new “Modern Interface” consisting of flat tiles that first debuted in the Windows Phone 7 mobile operating system.

Windows 8.1 (2013)

Windows 8.1 changed for the better a few things that were found wanting in Windows 8.


Notable changes included a visible Start button, an improved Start screen, Internet Explorer 11, tighter OneDrive integration, a Bing-powered unified search box, and the ability to land on the desktop at login instead of the Start screen.

Windows 10 (2015)

Windows 10 has been described as the “last operating system” from Microsoft. It is now a series of releases that receives half-yearly feature updates, referred to as Windows 10 version 1507, version 1803, and so on.


The OS introduced Edge, a new browser meant to replace Internet Explorer. It supports Universal Apps, which can be designed to run across multiple Microsoft product families: PCs, tablets, smartphones, embedded systems, Xbox One, Surface Hub, and Mixed Reality. It has been well received, but its automatic Windows Update system is one area that some users dislike.

Windows 11 (2021)

Windows 11, released in 2021, has all the features, power, and security of Windows 10. The most visible differences are a redesigned desktop and Settings menu, but there are also several other new features under the hood.


Wednesday, November 16, 2022

OPERATING SYSTEMS

OPERATING SYSTEM

TIMELINE


1954: MIT's Tape Director operating system was made for the UNIVAC 1103.

 

1956: GM-NAA I/O, the input/output system of General Motors and North American Aviation, was the first operating system for the IBM 704 computer.


1960: IBSYS was the tape-based operating system that IBM supplied with its IBM 7090 computer. It was really a basic monitor program that read the control card images and data cards of individual jobs.



1964: The Berkeley Timesharing System was a pioneering time-sharing operating system implemented between 1964 and 1967 at the University of California, Berkeley.



1966-DOS: Disk Operating System/360, also known as DOS/360 or simply DOS, was an operating system for IBM mainframes, first delivered by IBM in June 1966. In its time, DOS was the most widely used operating system in the world.
Initial releases of DOS could run only one program at a time. Later versions of "real" DOS were able to run up to three programs concurrently.


1969-UNIX: Unix is a multitasking, multi-user computer operating system that exists in many variants. The original Unix was developed at AT&T's Bell Labs research center by Ken Thompson and Dennis Ritchie. From the user's perspective, Unix systems are characterized by a modular design, sometimes called the "Unix philosophy": the OS provides a set of simple tools that each perform a limited, well-defined function, together with a unified filesystem and a shell scripting and command language for composing them into complex workflows.


1981-IBM PC DOS: IBM PC DOS (full name: IBM Personal Computer Disk Operating System) was an operating system for the IBM Personal Computer, manufactured and sold by IBM from the 1980s to the 2000s.



1984-MAC OS: In 1984, Apple Computer Inc. introduced the Macintosh personal computer with the Macintosh 128K model, which came bundled with what was later renamed Mac OS. After hearing from former Xerox employees about the pioneering GUI technology being developed at Xerox PARC, Steve Jobs negotiated a visit to see the Xerox Alto computer and Smalltalk development tools in exchange for Apple stock options. The Macintosh operating system used concepts from the Xerox Alto, but many elements of the graphical user interface were created by Apple, including the menu bar, pop-up menus, and the concepts of drag and drop and direct manipulation.


1985-Windows 1.0: Microsoft Windows 1.01 retails at a list price of $99. It's marketed as a graphical user interface that extends the DOS operating system and lets users run several programs at the same time and freely switch among them. But it's not touted as an actual operating system until a decade later.


 
1991-Linux: Norse OS god Linus Torvalds releases an open-source, Unix-like OS kernel that sort of bears his name. Linux is officially pronounced "leen-ooks," reflecting its Finnish origins. The Linux kernel will subsequently be combined with GNU software to create an array of open-source operating systems known as Linux distributions.



1995-Windows 95: Windows 95 appears, to great fanfare. It spawns a new line of Microsoft operating systems with one foot in the 32-bit world and the other stuck in the mud with not-yet-obsolete 16-bit software.


1996-Windows NT: Windows NT 4.0 is a preemptive, graphical operating system designed to work with either uniprocessor or symmetric multiprocessor computers. It was part of Microsoft's Windows NT line of operating systems.
It is a 32-bit Windows system, available in both workstation and server editions, with a graphical environment similar to that of Windows 95.


2000-Red Hat Linux: Red Hat Linux, assembled by the company Red Hat, was a popular Linux-based operating system and the first Linux distribution to use the RPM Package Manager. For a system administrator performing software installation and maintenance, package management (rather than manual building) has advantages such as simplicity, consistency, and the ability to automate these processes and run them non-interactively.


2007-iOS: iOS (previously iPhone OS) is a mobile operating system developed by Apple Inc. and distributed exclusively for Apple hardware. It is the operating system that powers many of the company's iDevices, including the iPhone, iPad, iPod Touch, and Apple TV.


2007-Android: Android is a mobile operating system (OS) based on the Linux kernel and developed by Google. With a user interface based on direct manipulation, Android is designed primarily for touchscreen mobile devices such as smartphones and tablet computers.



2011-Mac OS X Lion: Lion is the eighth major release of Mac OS X, Apple's desktop and server operating system for Macintosh computers. It shipped with over 250 new features, including an iPad-like user interface for Address Book, Auto Save, AirDrop (direct file sharing over Wi-Fi), FaceTime, the Apple Push Notification service, and more.


2012-Windows 8: Windows 8 is a personal computer operating system developed by Microsoft as part of the Windows NT family of operating systems. Windows 8 introduced major changes to the operating system's platform and user interface to improve the user experience on tablets.
Windows 8 also added native support for USB 3.0 devices, allowing faster data transfers and improved power management with compatible devices.



 




Download the document here 👈