What are programming languages written in?

This is my first question so be nice lol…

Think of it this way. Python is written in C, which is written in an older C compiler, which is written in an even older C compiler, which is written in B, which is written in (I think) BCPL. I am not sure what BCPL is written in, but it seems that there must be an original language somewhere?

In other words, every programming language is written in an older programming language. So what came first, and what was that coded in?

asked Oct 14, 2020 at 23:31 by fartgeek

What are programming languages written in?

Programming language compilers and runtimes are written in programming languages, not necessarily ones that are older than or different from the language they take as input. Some of the runtime code will drop into assembly to access certain hardware instructions or code sequences not easily obtained through the compiler.


Once bootstrapped, programming languages can self-host, so they are often written in the same language they compile. For example, C compilers are written in C or C++, and C#’s Roslyn compiler is written in C#.

When the Roslyn compiler adds a new language feature, they won’t use it in the source code for the compiler until it is debugged and working (e.g. released). This is akin to the bootstrapping exercise (limited to a new feature rather than the whole language).

But to be clear, there is the potential (and it is often realized) for a language implementation to be written in the latest version of its own input language.


So what came first, and what was that coded in?

Machine code came first. The first assemblers were themselves very simple (early assembly languages were very easy to parse and generate machine code for), and they were written in machine code until they were bootstrapped and became self-hosted.
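To give a feel for how little machinery that takes, here is a minimal sketch in Python of what such an early assembler did. The two-instruction machine and its opcodes are invented for illustration; a real early assembler did essentially this, just written in machine code rather than Python:

# A toy assembler for a hypothetical two-instruction machine.
OPCODES = {"LOAD": 0x01, "ADD": 0x02}  # mnemonic -> opcode byte

def assemble(source: str) -> bytes:
    """Translate one 'MNEMONIC operand' per line into machine code."""
    program = bytearray()
    for line in source.strip().splitlines():
        mnemonic, operand = line.split()
        program.append(OPCODES[mnemonic])  # emit the opcode byte
        program.append(int(operand))       # emit the operand byte
    return bytes(program)

print(assemble("LOAD 7\nADD 5").hex())  # -> 01070205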

answered Oct 15, 2020 at 1:07 by Erik Eidt

Think of it this way. Python is written in C,

No, it is not.

You seem to be confusing a Programming Language like Python or C with a Programming Language Implementation (e.g. a Compiler or Interpreter) like PyPy or Clang.

A Programming Language is a set of semantic and syntactic rules and restrictions. It is just an idea. A piece of paper. It isn’t «written in» anything (in the sense that e.g. Linux is «written in» C). At most, we can say it is written in English, or more precisely, in a specific jargon of English: a semi-formal subset of English extended with logic notation.

Different specifications are written in different styles. Here are some examples:

  • The Java Language Specification
  • The Scala Language Specification
  • The Haskell 2010 Language Report
  • The Revised⁷ Report on the Algorithmic Language Scheme
  • The ECMA-262 ECMAScript® Language Specification
  • Python does not really have a single Language Specification like many other languages do; the information is splintered between the Python Language Reference, the Python Enhancement Proposals, and a lot of implicit institutional knowledge that exists only in the collective heads of the Python community

There are multiple Python implementations in common use today, and only one of them is written in C:

  • Brython is written in ECMAScript
  • IronPython is written in C#
  • Jython is written in Java
  • GraalPython is written in Java, using the Truffle Language Implementation Framework
  • PyPy is written in the RPython Programming Language (a statically typed language roughly at the abstraction level of Java, roughly with the performance of C, with syntax and runtime semantics that are a proper subset of Python) using the RPython Language Implementation Framework
  • CPython is written in C

In other words, every programming language is written in an older programming language. So what came first, and what was that coded in?

Again, you are confusing Programming Languages and Programming Language Implementations.

Programming Languages are written in English. Programming Language Implementations are written in Programming Languages. They can be written in any Programming Language. For example, Jython is a Python implementation written in Java. GHC is a Haskell implementation written in Haskell. GCC is a C compiler written in C. tsc is a TypeScript compiler written in TypeScript. rustc is a Rust compiler written in Rust. NSC is a Scala compiler written in Scala. javac is a Java compiler written in Java. Roslyn is a C# compiler written in C#.

And so on and so forth, there really is no restriction on the language used to implement a compiler or interpreter. (There is a theoretical limitation in that an interpreter for a Turing-complete language must also be written in a Turing-complete language.)
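To make the distinction concrete, here is a minimal sketch of an implementation that is obviously separate from the language it implements: a toy postfix arithmetic language, interpreted in Python. The language is just its rules; Python is merely one of many languages the interpreter could have been written in:

# An interpreter for a tiny postfix "language", written in Python.
def interpret(program: str) -> float:
    stack = []
    for token in program.split():
        if token in {"+", "-", "*", "/"}:
            b, a = stack.pop(), stack.pop()
            stack.append({"+": a + b, "-": a - b,
                          "*": a * b, "/": a / b}[token])
        else:
            stack.append(float(token))  # anything else must be a number
    return stack.pop()

print(interpret("3 4 + 2 *"))  # -> 14.0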

answered Oct 15, 2020 at 7:55 by Jörg W Mittag

Each machine has an instruction set it natively executes.

That instruction set is the first language.

The first higher-level language was assembly, which literally allowed the programmer to write a mnemonic expression like mov ax, bx instead of the corresponding binary word.

The first compiler was written in machine language, though by today’s standards it would more accurately be called an assembler. It took the assembly language and translated it to the binary encoding.

This has happened many times over for many different machines until the first cross-compilers were developed that could rewrite a program into another machine language.

Even now, though, there are still languages that are first implemented in terms of a machine language.

answered Oct 15, 2020 at 0:32 by Kain0_0

What is Programming? A Handbook for Beginners

Welcome to the amazing world of programming. This is one of the most useful and powerful skills that you can learn and use to make your visions come true.

In this handbook, we will dive into why programming is important, its applications, its basic concepts, and the skills you need to become a successful programmer.

You will learn:

  • What programming is and why it is important.
  • What a programming language is and why it is important.
  • How programming is related to binary numbers.
  • Real-world applications of programming.
  • Skills you need to succeed as a programmer.
  • Tips for learning how to code.
  • Basic programming concepts.
  • Types of programming languages.
  • How to contribute to open source projects.
  • And more…

Are you ready? Let’s begin! ✨  


Programming is essential for our everyday lives.

Did you know that computer programming is already a fundamental part of your everyday life? Let’s see why. I’m sure that you will be greatly surprised.

Every time you turn on your smartphone, laptop, tablet, smart TV, or any other electronic device, you are running code that was planned, developed, and written by developers. This code creates the final and interactive result that you can see on your screen.

That is exactly what programming is all about. It is the process of writing code to solve a particular problem or to implement a particular task.

Programming is what allows your computer to run the programs you use every day and your smartphone to run the apps that you love. It is an essential part of our world as we know it.

Whenever you check your calendar, attend virtual conferences, browse the web, or edit a document, you are using code that has been written by developers.

«And what is code?» you may ask.

Code is a sequence of instructions that a programmer writes to tell a device (like a computer) what to do.

The device cannot know by itself how to handle a particular situation or how to perform a task. So developers are in charge of analyzing the situation and writing explicit instructions to implement what is needed.

To do this, they follow a particular syntax (a set of rules for writing the code).

A developer (or programmer) is the person who analyzes a problem and implements a solution in code.

Sounds amazing, right? It’s very powerful and you can be part of this wonderful world too by learning how to code. Let’s see how.

You, as a developer.

Let’s put you in a developer’s shoes for a moment. Imagine that you are developing a mobile app, like the ones that you probably have installed on your smartphone right now.

What is the first thing that you would do?

Think about this for a moment.

The answer is…

Analyzing the problem. What are you trying to build?

As a developer, you would start by designing the layout of the app, how it will work, its different screens and functionality, and all the small details that will make your app an awesome tool for users around the world.

Only after you have everything carefully planned out can you start to write your code. To do that, you will need to choose a programming language to work with. Let’s see what a programming language is and why it is super important.

🔸 What is a Programming Language?


Logos of popular programming languages.

A programming language is a language that computers can understand.

We cannot just write English words in our program like this:

«Computer, solve this task!»

and hope that our computer can understand what we mean. We need to follow certain rules to write the instructions.

Every programming language has its own set of rules that determine if a line of code is valid or not. Because of this, the code you write in one programming language will be slightly different from others.

💡 Tip: Some programming languages are more complex than others but most of them share core concepts and functionality. If you learn how to code in one programming language, you will likely be able to learn another one faster.

Before you can start writing awesome programs and apps, you need to learn the basic rules of the programming language you chose for the task.

💡 Tip: a program is a set of instructions written in a programming language for the computer to execute. We usually write the code for our program in one or multiple files.

For example, this is a line of code in Python (a very popular programming language) that shows the message "Hello, World!":

print("Hello, World!")

But if we write the same line of code in JavaScript (a programming language mainly used for web development), we will get an error because it will not be valid.

To do something very similar in JavaScript, we would write this line of code instead:

console.log("Hello, World!");

Visually, they look very different, right? This is because Python and JavaScript have a different syntax and a different set of built-in functions.

💡 Tip: built-in functions are basically tasks that are already defined in the programming language. This lets us use them directly in our code by writing their names and by specifying the values they need.  

In our examples, print() is a built-in function in Python while console.log() is a function that we can use in JavaScript to see the message in the console (an interactive tool) if we run our code in the browser.

Examples of programming languages include Python, JavaScript, TypeScript, Java, C, C#, C++, PHP, Go, Swift, SQL, and R. There are many programming languages and most of them can be used for many different purposes.

💡 Tip: These were the most popular programming languages on the Stack Overflow Developer Survey 2022:


The 14 most popular programming languages among all respondents of the StackOverflow Developer Survey 2022. It’s a yearly survey that collects information about popular technologies and trends in the developer community.

There are many other programming languages (hundreds or even thousands!) but usually, you will learn and work with some of the most popular ones. Some of them have broader applications like Python and JavaScript while others (like R) have more specific (and even scientific) purposes.

This sounds very interesting, right? And we are only starting to talk about programming languages. There is a lot to learn about them and I promise you that if you dive deeper into programming, your time and effort will be totally worth it.

Awesome! Now that you know what programming is and what programming languages are all about, let’s see how programming is related to binary numbers.

🔹 Programming and Binary Numbers

When you think about programming, perhaps the first thing that comes to your mind is something like the image below, right? A sequence of 0s and 1s on your computer.


Binary numbers are 0 and 1.

Programming is indeed related to binary numbers (0 and 1), but in an indirect way. Developers do not actually write their code using zeros and ones.

We usually write programs in a high-level programming language, a programming language with a syntax that recognizes specific words (called keywords), symbols, and values of different data types.

Basically, we write code in a way that humans can understand.

For example, these are the keywords that we can use in Python:

False               class               from                or
None                continue            global              pass
True                def                 if                  raise
and                 del                 import              return
as                  elif                in                  try
assert              else                is                  while
async               except              lambda              with
await               finally             nonlocal            yield
break               for                 not 
List of Python keywords.

Every programming language has its own set of keywords (words written in English). These keywords are part of the syntax and core functionality of the programming language.
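💡 Tip: if you have Python installed, you can ask it for this list yourself. This is a small sketch using the standard keyword module (the exact number of keywords varies between Python versions):

import keyword

print(keyword.kwlist)       # the same keywords shown in the table above
print(len(keyword.kwlist))  # 35 in Python 3.10, for example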

But keywords are just common words in English, almost like the ones that we would find in a book.

That leads us to two very important questions:

  • How does the computer understand and interpret what we are trying to say?
  • Where does the binary number system come into play here?

The computer does not understand these words, symbols, or values directly.

When a program runs, the code that we write in a high-level programming language that humans can understand is automatically transformed into binary code that the computer can understand.


Process of transforming a program into binary code.

This transformation of source code that humans can understand into binary code that the computer can understand is called compilation.

According to Britannica, a compiler is defined as:

Computer software that translates (compiles) source code written in a high-level language (e.g., C++) into a set of machine-language instructions that can be understood by a digital computer’s CPU.

Britannica also mentions that:

The term compiler was coined by American computer scientist Grace Hopper, who designed one of the first compilers in the early 1950s.

Some programming languages can be classified as compiled programming languages while others can be classified as interpreted programming languages, based on how they are transformed into machine-language instructions.

However, they all have to go through a process that converts them into instructions that the computer can understand.
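If you are curious, Python lets us peek at this process. This is a small sketch using the standard dis module; note that Python compiles source code to bytecode for its own virtual machine rather than directly to CPU machine code, and the exact instructions vary between Python versions:

import dis

# Show the lower-level instructions Python generates for one line of code.
dis.dis('print("Hello, World!")')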

Awesome. Now you know why binary code is so important for computer science. Without it, programming would basically not exist because computers would not be able to understand our instructions.

Now let’s dive into the applications of programming and the different areas that you can explore.

🔸 Real-World Applications of Programming


Programming has many real-world applications in basically every industry that you can imagine. 

Programming has many different applications in many different industries. This is truly amazing because you can apply your knowledge in virtually any industry that you are interested in.

From engineering to farming, from game development to physics, the possibilities are endless if you learn how to code.  

Let’s see some of them. (I promise you, they are amazing! ⭐)

Front-End Web Development


Front-End Web Developers develop the parts of websites and web applications that users can see and interact with.

If you learn how to code, you can use your programming skills to design and develop websites and online platforms. Front-End Web Developers create the parts of the websites that users can see and interact with directly.

For example, right now you are reading an article on freeCodeCamp’s publication. The publication looks and works the way it does thanks to code that front-end web developers wrote line by line.

💡 Tip: If you learn front-end web development, you can do this too.


The HTML and CSS code for freeCodeCamp’s Home Page (This is a preview of the code in Chrome Developer Tools).

Front-End Web Developers use HTML and CSS to create the structure of the website (these are markup languages, which are used to present information) and they write JavaScript code to add functionality and interactivity.

If you are interested in learning front-end web development, you can learn HTML and CSS with these free courses on freeCodeCamp’s YouTube Channel:

  • Learn HTML5 and CSS3 From Scratch — Full Course
  • Learn HTML & CSS – Full Course for Beginners
  • Frontend Web Development Bootcamp Course (JavaScript, HTML, CSS)
  • Introduction To Responsive Web Design — HTML & CSS Tutorial

You can also learn JavaScript for free with these free online courses:

  • Learn JavaScript — Full Course for Beginners
  • JavaScript Programming — Full Course
  • JavaScript DOM Manipulation – Full Course for Beginners
  • Learn JavaScript by Building 7 Games — Full Course

💡 Tip: You can also earn a Responsive Web Design Certification while you learn with interactive exercises on freeCodeCamp.

Back-End Web Development


Back-End Web Developers develop servers and databases to handle everything that runs behind the scenes to make more complex web applications work correctly.

More complex and dynamic web applications that work with user data also require a server. This is a computer program that receives requests and sends appropriate responses. They also need a database, a collection of values stored in a structured way.

Back-End Web Developers are in charge of developing the code for these servers. They decide how to handle the different requests, how to send appropriate resources, how to store the information, and basically how to make everything that runs behind the scenes work smoothly and efficiently.
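To give you a taste, this is a minimal sketch of a server written with Python’s standard http.server module. Real back ends build routing, databases, and security on top of this same request-and-response idea:

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)  # HTTP status: OK
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from the back end!")

# Receives requests on http://localhost:8000 and sends responses forever.
HTTPServer(("localhost", 8000), Handler).serve_forever()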

A real-world example of back-end web development is what happens when you create an account on freeCodeCamp and complete a challenge. Your information is stored in a database and you can access it later when you sign in with your email and password.


A freeCodeCamp interactive challenge.

This amazing interactive functionality was implemented by back-end web developers.

💡 Tip: Full-stack Web Developers are in charge of both Front-End and Back-End Web Development. They have specialized knowledge in both areas.

All the complex platforms that you use every day, like social media platforms, online shopping platforms, and educational platforms, use servers and back-end web development to power their amazing functionality.

Python is an example of a powerful programming language used for this purpose. This is one of the most popular programming languages out there, and its popularity continues to rise every year. This is partly because it is simple and easy to learn and yet powerful and versatile enough to be used in real-world applications.

💡 Tip: if you are curious about the specific applications of Python, this is an article I wrote on this topic.

JavaScript can also be used for back-end web development thanks to Node.js.

Other programming languages used to develop web servers are PHP, Ruby, C#, and Java.

If you would like to learn Back-End Web Development, these are free courses on freeCodeCamp’s YouTube channel:

  • Python Backend Web Development Course (with Django)
  • Node.js and Express.js — Full Course
  • Full Stack Web Development for Beginners (Full Course on HTML, CSS, JavaScript, Node.js, MongoDB)
  • Node.js / Express Course — Build 4 Projects

💡 Tip: freeCodeCamp also has a free Back End Development and APIs certification.

Mobile App Development


Mobile app developers design and develop the mobile apps we use every day.

Mobile apps have become part of our everyday lives. I’m sure that you could not imagine life without them.

Think about your favorite mobile app. What do you love about it?

Our favorite apps help us with our daily tasks, they entertain us, they solve a problem, and they help us to achieve our goals. They are always there for us.

That is the power of mobile apps and you can be part of this amazing world too if you learn mobile app development.

Developers focused on mobile app development are in charge of planning, designing, and developing the user interface and functionality of these apps. They identify a gap in the existing apps and they try to create a working product to make people’s lives better.

💡 Tip: regardless of the field you choose, your goal as a developer should always be making people’s lives better. Apps are not just apps, they have the potential to change our lives. You should always remember this when you are planning your projects. Your code can make someone’s life better and that is a very important responsibility.

Mobile app developers use programming languages like JavaScript, Java, Swift, Kotlin, and Dart. Frameworks like Flutter and React Native are super helpful to build cross-platform mobile apps (that is, apps that run smoothly on multiple different operating systems like Android and iOS).

According to Flutter’s official documentation:

Flutter is an open source framework by Google for building beautiful, natively compiled, multi-platform applications from a single codebase.

If you would like to learn mobile app development, these are free courses that you can take on freeCodeCamp’s YouTube channel:

  • Flutter Course for Beginners – 37-hour Cross Platform App Development Tutorial
  • Flutter Course — Full Tutorial for Beginners (Build iOS and Android Apps)
  • React Native — Intro Course for Beginners
  • Learn React Native Gestures and Animations — Tutorial

Game Development


Games create long-lasting memories. I’m sure that you still remember your favorite games and why you love (or loved) them so much. Being a game developer means having the opportunity of bringing joy and entertainment to players around the world.

Game developers envision, design, plan, and implement the functionality of a game. They also need to find or create assets such as characters, obstacles, backgrounds, music, sound effects, and more.

💡 Tip: if you learn how to code, you can create your own games. Imagine creating an awesome and engaging game that users around the world will love. That is what I personally love about programming. You only need your computer, your knowledge, and some basic tools to create something amazing.

Popular programming languages used for game development include JavaScript, C++, Python, and C#.

If you are interested in learning game development, you can take these free courses on freeCodeCamp’s YouTube channel:

  • JavaScript Game Development Course for Beginners
  • Learn JavaScript by Building 7 Games — Full Course
  • Learn Unity — Beginner’s Game Development Tutorial
  • Learn Python by Building Five Games — Full Course
  • Code a 2D Game Using JavaScript, HTML, and CSS (w/ Free Game Assets) – Tutorial
  • 2D Game Development with GDevelop — Crash Course
  • Pokémon Coding Tutorial — CS50’s Intro to Game Development

Biology, Physics, and Chemistry


Programming can be applied in every scientific field that you can imagine, including biology, physics, chemistry, and even astronomy. Yes! Scientists use programming all the time to collect and analyze data. They can even run simulations to test hypotheses.

Biology

In biology, computer programs can simulate population genetics and population dynamics. There is even an entire field called bioinformatics.

According to this article «Bioinformatics» by Ardeshir Bayat, member of the Centre for Integrated Genomic Medical Research at the University of Manchester:

Bioinformatics is defined as the application of tools of computation and analysis to the capture and interpretation of biological data.

Dr. Bayat mentions that bioinformatics can be used for genome sequencing. He also mentions that its discoveries may lead to drug discoveries and individualized therapies.

Frequently used programming languages for bioinformatics include Python, R, PHP, PERL, and Java.

💡 Tip: R is a programming «language and environment for statistical computing and graphics» (source).

An example of a great tool that scientists can use for biology is Biopython. This is a Python framework with «freely available tools for biological computation.»

If you would like to learn more about how you can apply your programming skills in science, these are free courses that you can take on freeCodeCamp’s YouTube channel:

  • Python for Bioinformatics — Drug Discovery Using Machine Learning and Data Analysis
  • R Programming Tutorial — Learn the Basics of Statistical Computing
  • Learn Python — Full Course for Beginners [Tutorial]

Physics

Physics requires running many simulations and programming is perfect for doing exactly that. With programming, scientists can program and run simulations based on specific scenarios that would be hard to replicate in real life. This is much more efficient.

Programming languages that are commonly used for physics simulations include C, Java, Python, MATLAB, and JavaScript.  

Chemistry

Chemistry also relies on simulations and data analysis, so it’s a field where programming can be a very helpful tool.

In this scientific article by Dr. Ivar Ugi and his colleagues from Organisch-chemisches Institut der Technischen Universität München, they mention that:

The design of entirely new syntheses, and the classification and documentation of structures, substructures, and reactions are examples of new applications of computers to chemistry.

Scientific experiments also generate detailed data and results that can be analyzed with computer programs developed by scientists.  

Think about it: writing a program to generate a box plot or a scatter plot or any other type of plot to visualize trends in thousands of measurements can save researchers a lot of time and effort. This lets them focus on the most important part of their work: analyzing the results.


Example of data visualizations that you can create with Seaborn, a Python data visualization library. This can be very helpful for analyzing data, right?

💡 Tips: if you are interested in diving deeper into this, this is a list of chemistry simulations by the American Chemical Society. These simulations were programmed by developers and they are helping thousands of students and teachers around the world.

Think about it…You could build the next great simulation. If you are interested in a scientific field, I totally recommend learning how to code. Your work will be much more productive and your results will be easier to analyze.

If you are interested in learning programming for scientific applications, these are free courses on freeCodeCamp’s YouTube channel:

  • Python for Bioinformatics — Drug Discovery Using Machine Learning and Data Analysis
  • Python for Data Science — Course for Beginners (Learn Python, Pandas, NumPy, Matplotlib)

Data Science and Engineering


Speaking of data… programming is also essential for a field called Data Science. If you are interested in answering questions through data and statistics, this field might be exactly what you are looking for, and having programming skills will help you to achieve your goals.

Data scientists collect and analyze data in order to answer questions in many different fields. According to UC Berkeley in the article «What is Data Science?»:

Effective data scientists are able to identify relevant questions, collect data from a multitude of different data sources, organize the information, translate results into solutions, and communicate their findings in a way that positively affects business decisions.

There are many powerful programming languages for analyzing and visualizing data, but perhaps one of the most frequently used ones for this purpose is Python.

This is an example of the type of data visualizations that you can create with Python. They are very helpful for analyzing data visually and you can customize them to fit your needs.


Sample data visualizations from the Matplotlib and Seaborn galleries

If you are interested in learning programming for data science, these are free courses on freeCodeCamp’s YouTube channel:

  • Learn Data Science Tutorial — Full Course for Beginners
  • Intro to Data Science — Crash Course for Beginners
  • Python for Data Science — Course for Beginners (Learn Python, Pandas, NumPy, Matplotlib)
  • Build 12 Data Science Apps with Python and Streamlit — Full Course
  • Data Analysis with Python — Full Course for Beginners (Numpy, Pandas, Matplotlib, Seaborn)

💡 Tip: you can also earn these free certifications on freeCodeCamp:

  • Data Visualization
  • Data Analysis with Python

Engineering

Engineering is another field where programming can help you to succeed. Being able to write your own computer programs can make your work much more efficient.

There are many tools created specifically for engineers. For example, the R programming language is specialized in statistical applications and Python is very popular in this field too.

Another great tool for programming in engineering is MATLAB. According to its official website:

MATLAB is a programming and numeric computing platform used by millions of engineers and scientists to analyze data, develop algorithms, and create models.

Really, the possibilities are endless.

You can learn MATLAB with this crash course on the freeCodeCamp YouTube channel.

If you are interested in learning engineering tools related to programming, this is a free course on freeCodeCamp’s YouTube channel that covers AutoCAD, a 2D and 3D computer-aided design software used by engineers:

  • AutoCAD for Beginners — Full University Course

Medicine and Pharmacology


Programming has helped scientists to develop new medical techniques and devices.

Medicine and pharmacology are constantly evolving by finding new treatments and procedures. Let’s see how you can apply your programming skills in these fields.

Medicine

Programming is really everywhere. If you are interested in the field of medicine, learning how to code can be very helpful for you too. Even if you would like to focus on computer science and software development, you can apply your knowledge in both fields.

Specialized developers are in charge of developing and writing the code that powers and controls the devices and machines that are used by modern medicine.

Think about it…all these machines and devices are controlled by software and someone has to write that software. Medical records are also stored and tracked by specialized systems created by developers. That could be you if you decide to follow this path. Sounds exciting, right?

According to the scientific article Application of Computer Techniques in Medicine:

Major uses of computers in medicine include hospital information system, data analysis in medicine, medical imaging laboratory computing, computer assisted medical decision making, care of critically ill patients, computer assisted therapy and so on.

Pharmacology

Programming and computer science can also be applied to develop new drugs in the field of pharmacology.

A remarkable example of what you can achieve in this field by learning how to code is presented in this article by MIT News. It describes how an MIT senior, Kristy Carpenter, was using computer science in 2019 to develop «new, more affordable drugs.» Kristy mentions that:

Artificial intelligence, which can help compute the combinations of compounds that would be better for a particular drug, can reduce trial-and-error time and ideally quicken the process of designing new medicines.

Another example of a real-world application of programming in pharmacology is related to Python (yes, Python has many applications!). Among its success stories, we find that Python was selected by AstraZeneca to develop techniques and programs that can help scientists to discover new drugs faster and more efficiently.

The documentation explains that:

To save time and money on laboratory work, experimental chemists use computational models to narrow the field of good drug candidates, while also verifying that the candidates to be tested are not simple variations of each other’s basic chemical structure.

If you are interested in learning programming for medicine or health-related fields, this is a free course on freeCodeCamp’s YouTube channel on programming for healthcare imaging:

  • PyTorch and Monai for AI Healthcare Imaging — Python Machine Learning Course

Education


Programming can be used to create tools that help teachers and students to have a more productive and engaging learning experience. Teaching students to code also develops their problem-solving skills.

Have you ever thought that programming could be helpful for education? Well, let me tell you that it is and it is very important. Why? Because the digital learning tools that students and teachers use nowadays are programmed by developers.

Every time a student opens an educational app, browses an educational platform like freeCodeCamp, writes on a digital whiteboard, or attends a class through an online meeting platform, programming is making that possible.

As a programmer or as a teacher who knows how to code, you can create the next great app that will enhance the learning experience of students around the world.

Perhaps it will be a note-taking app, an online learning platform, a presentation app, an educational game, or any other app that could be helpful for students.

The important thing is to create it with students in mind if your goal is to make something amazing that will create long-lasting memories.

If you envision it, then you can create it with code.  

Teachers can also teach their students how to code to develop their problem-solving skills and to teach them important skills for their future.

💡 Tip: if you are teaching students how to code, Scratch is a great programming language to teach the basics of programming. It is particularly focused on teaching children how to code in an interactive way.

According to the official Scratch website:

Scratch is the world’s largest coding community for children and a coding language with a simple visual interface that allows young people to create digital stories, games, and animations.

If you are interested in learning how to code for educational purposes, these are courses that you may find helpful on freeCodeCamp’s YouTube channel:

  • Scratch Tutorial for Beginners — Make a Flappy Bird Game
  • Computational Thinking & Scratch — Intro to Computer Science — Harvard’s CS50 (2018)
  • Android Development for Beginners — Full Course
  • Flutter Course for Beginners – 37-hour Cross Platform App Development Tutorial
  • Learn Unity — Beginner’s Game Development Tutorial

Machine Learning, Artificial Intelligence, and Robotics


Machine Learning and Artificial Intelligence are very popular nowadays because platforms can learn how users engage with their content in order to suggest relevant information and products to them.

Some of the most amazing fields that are directly related to programming are Machine Learning, Artificial Intelligence, and Robotics. Let’s see why.

Artificial Intelligence is defined by Britannica as:

The project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.

Machine learning is a branch or a subset of the field of Artificial Intelligence in which systems can learn on their own based on data. The goal of this learning process is to predict the expected output. These models continuously learn how to «think» and how to analyze situations based on their previous training.
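This is a tiny, self-contained sketch of that idea in Python: a one-nearest-neighbor classifier that predicts the label of a new point by copying the label of the closest training example. The data points and labels are made up for illustration:

# "Learning" from data in miniature: 1-nearest-neighbor classification.
training_data = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
                 ((5.0, 5.2), "dog"), ((4.8, 5.1), "dog")]

def predict(point):
    def distance(example):
        (x, y), _label = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    return min(training_data, key=distance)[1]  # label of closest example

print(predict((0.9, 1.1)))  # -> cat
print(predict((5.1, 4.9)))  # -> dog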

The most commonly used programming languages in these fields are Python, C, C#, C++, and MATLAB.

Artificial intelligence and Machine Learning have amazing applications in various industries, such as:

  • Image and object detection.
  • Making predictions based on patterns.
  • Text recognition.
  • Recommendation engines (like when an online shopping platform shows you products that you may like or when YouTube shows you videos that you may like).
  • Spam detection for emails.
  • Fraud detection.
  • Social media features like personalized feeds.
  • Many more… there are literally millions of applications in virtually every industry.

If you are interested in learning how to code for Artificial Intelligence and Machine Learning, these are free courses on freeCodeCamp’s YouTube channel:

  • Machine Learning for Everybody – Full Course
  • Machine Learning Course for Beginners
  • PyTorch for Deep Learning & Machine Learning – Full Course
  • TensorFlow 2.0 Complete Course — Python Neural Networks for Beginners Tutorial
  • Self-Driving Car with JavaScript Course – Neural Networks and Machine Learning
  • Python TensorFlow for Machine Learning – Neural Network Text Classification Tutorial
  • Practical Deep Learning for Coders — Full Course from fast.ai and Jeremy Howard
  • Deep Learning Crash Course for Beginners
  • Advanced Computer Vision with Python — Full Course

💡 Tip: you can also earn a Machine Learning with Python Certification on freeCodeCamp.

Robotics

Programming is also very important for robotics. Yes, robots are programmed too!

Robotics is defined by Britannica as the:

Design, construction, and use of machines (robots) to perform tasks done traditionally by human beings.

Robots are just like computers. They do not know what to do until you tell them what to do by writing instructions in your programs. If you learn how to code, you can program robots and industrial machinery found in manufacturing facilities.

If you are interested in learning how to code for robotics, electronics, and related fields, this is a free course on Arduino on freeCodeCamp’s YouTube channel:

  • Arduino Course for Beginners — Open-Source Electronics Platform

Other Applications

There are many other fascinating applications of programming in almost every field. These are some highlights:

  • Agriculture: in this article by MIT News, a farmer developed an autonomous tractor app after learning how to code.
  • Self-driving cars: autonomous cars rely on software to analyze their surroundings and to make quick and accurate decisions on the road. If you are interested in this area, this is a course on this topic on freeCodeCamp’s YouTube channel.
  • Finance: programming can also be helpful to develop programs and models that predict financial indicators and trends. For example, this is a course on algorithmic trading on freeCodeCamp’s YouTube channel.

The possibilities are endless. I hope that this section gives you a notion of why learning how to code is so important for your present and your future. It will be a valuable skill to have in any field you choose.

Awesome. Now let’s dive into the soft skills that you need to become a successful programmer.

🔹 Skills of a Successful Programmer


After going through the diverse range of applications of programming, you must be curious to know what skills are needed to succeed in this field.

Curiosity

A programmer should be curious. Whether you are just starting to learn how to code or you already have 20 years of experience, coding projects will always present you with new challenges and learning opportunities. If you take these opportunities, you will continuously improve your skills and succeed.

Enthusiasm

Enthusiasm is a key trait of a successful programmer but this applies in general to any field if you want to succeed. Enthusiasm will keep you happy and curious about what you are creating and learning.

💡 Tip: If you ever feel like you are not as enthusiastic as you used to be, it’s time to find or learn something new that can light the spark in you again and fill you with hope and dreams.

Patience

A programmer must be patient because transforming an initial idea into a working product can take time, effort, and many different steps. Patience will keep you focused on your final goal.  

Resilience

Programming can be challenging. That is true. But what defines you is not how many challenges you face, it’s how you face them. If you thrive despite these challenges, you will become a better programmer and you could create something that could change the world.

Creativity

Programmers must be creative because even though every programming language has a particular set of rules for writing the code, coding is like using LEGOs. You have the building blocks but you need to decide what to create and how to create it. The process of writing the code requires creativity while following the established best practices.

Problem-solving and Analysis

Programming is basically analyzing and solving problems with code. Depending on your field of choice, those problems will be simpler or more complex but they will all require some level of problem-solving skills and a thorough analysis of the situation.

Questions like:

  • What should I build?
  • How can I build it?
  • What is the best way to build this?

are all part of the everyday routine of a programmer.

Ability to Focus for Long Periods of Time

When you are working on a coding project, you will need to focus on a task for long periods of time. From creating the design, to planning and writing the code, to testing the result, and to fixing bugs (issues with the code), you will dedicate many hours to a particular task. This is why it’s essential to be able to focus and to keep your final goal in mind.

Taking Detailed Notes

This skill is very important for programmers, particularly when you are learning how to code. Taking detailed notes can help you to understand and remember the concepts and tools you learn. This also applies to experienced programmers, since being a programmer involves life-long learning.

Communication

Initially, you might think that programming is a solitary activity and imagine that a programmer spends hundreds of hours alone sitting at a desk.

But the reality is that when you find your first job, you will see that communication is super important to coordinate tasks with other team members and to exchange ideas and feedback.

Open to Feedback

In programming, there is usually more than one way to implement the same functionality. Different alternatives may work similarly, but some may be easier to read or more efficient in terms of time or resource consumption.

When you are learning how to code, you should always take constructive feedback as a tool for learning. Similarly, when you are working on a team, take your colleagues’ feedback positively and always try to improve.

Life-long Learning

Programming equals life-long learning. If you are interested in learning how to code, you must know that you will always need to be learning new things as new technologies emerge and existing technologies are updated. Think about it… that is great because there is always something interesting and new to learn!

Open to Trying New Things

Finally, an essential skill to be a successful programmer is to be open to trying new things. Step out of your comfort zone and be open to new technologies and products. In the technology industry, things evolve very quickly and adapting to change is essential.

🔸 Tips for Learning How to Code


Now that you know more about programming, programming languages, and the skills you need to be a successful programmer, let’s see some tips for learning how to code.

💡 Tip: these tips are based on my personal experience and opinions.

  • Choose one programming language to learn first. When you are learning how to code, it’s easy to feel overwhelmed with the number of options and entry paths. My advice would be to focus on understanding the essential computer science concepts and one programming language first. Python and JavaScript are great options to start learning the fundamentals.
  • Take detailed notes. Note-taking skills are essential to record and to analyze the topics you are learning. You can add custom comments and annotations to explain what you are learning.
  • Practice constantly. You can only improve your problem-solving skills by practicing and by learning new techniques and tools. Try to practice every day.

💡 Tip: There is a challenge called #100DaysOfCode that you can join to practice every day.

  • Always try again. If you can’t solve a problem on your first try, take a break and come back again and again until you solve it. That is the only way to learn. Learn from your mistakes and learn new approaches.
  • Learn how to research and how to find answers. Programming languages, libraries, and frameworks usually have official documentation that explains their built-in elements and tools and how you can use them. This is a precious resource that you should definitely refer to.
  • Browse Stack Overflow. This is an amazing platform. It is like an online encyclopedia of answers to common programming questions. You can find answers to existing questions and ask new questions to get help from the community.
  • Set goals. Motivation is one of the most important factors for success. Setting goals is very important to keep you focused, motivated, and enthusiastic. Once you reach your goals, set new ones that you find challenging and exciting.
  • Create projects. When you are learning how to code, applying your skills will help you to expand your knowledge and remember things better. Creating projects is the perfect way to practice and to create a portfolio that you can show to potential employers.

🔹 Basic Programming Concepts


Great. If reading this article has helped you confirm that you want to learn programming, let’s take your first steps.

These are some basic programming concepts that you should know (a short Python sketch after this list ties them together):

  • Variable: a variable is a name that we assign to a value in a computer program. When we define a variable, we assign a value to a name and we allocate a space in memory to store that value. The value of a variable can be updated during the program.
  • Constant: a constant is similar to a variable. It stores a value but it cannot be modified. Once you assign a value to a constant, you cannot change it during the entire program.
  • Conditional: a conditional is a programming structure that lets developers choose what the computer should do based on a condition. If the condition is True, something will happen but if the condition is False, something different can happen.
  • Loop: a loop is a programming structure that let us run a code block (a sequence of instructions) multiple times. They are super helpful to avoid code repetition and to implement more complex functionality.
  • Function: a function helps us to avoid code repetition and to reuse our code. It is like a code block to which we assign a name but it also has some special characteristics. We can write the name of the function to run that sequence of instructions without writing them again.

💡 Tip: Functions can communicate with main programs and main programs can communicate with functions through parameters, arguments, and return statements.

  • Class: a class is used as a blueprint to define the characteristics and functionality of a type of object. Just like we have objects in our real world, we can represent objects in our programs.
  • Bug: a bug is an error in the logic or implementation of a program that results in an unexpected or incorrect output.
  • Debugging: debugging is the process of finding and fixing bugs in a program.
  • IDE: this acronym stands for Integrated Development Environment. It is a software development environment that has the most helpful tools that you will need to write computer programs such as a file editor, an explorer, a terminal, and helpful menu options.

💡 Tip: a commonly used and free IDE is Visual Studio Code, created by Microsoft.
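Here is the short Python sketch mentioned above, tying these concepts together (all the names in it are invented for illustration):

GREETING = "Hello"  # a constant (by convention, uppercase names in Python)

class Dog:  # a class: a blueprint for Dog objects
    def __init__(self, name):
        self.name = name  # each Dog object stores its own name

def greet(dog):  # a function we can reuse by calling its name
    message = f"{GREETING}, {dog.name}!"  # a variable holding a value
    if dog.name == "Rex":  # a conditional
        message += " Good boy!"
    return message

for pet in [Dog("Rex"), Dog("Luna")]:  # a loop over two objects
    print(greet(pet))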

Awesome! Now you know some of the fundamental concepts in programming. As you learned, each programming language has a different syntax, but they all share most of these programming structures and concepts.

🔸 Types of Programming Languages


Programming languages can be classified based on different criteria. If you want to learn how to code, it’s important for you to learn these basic classifications:

Complexity

  • High-level programming languages: they are designed to be understood by humans and they have to be converted into machine code before the computer can understand them. They are the programming languages that we commonly use. For example: JavaScript, Python, Java, C#, C++, and Kotlin.
  • Low-level programming languages: they are more difficult to understand because they are not designed for humans. They are designed to be understood and processed efficiently by machines.

Conversion into Machine Code

  • Compiled programming languages: programs written with this type of programming language are converted directly into machine code by a compiler. Examples include C, C++, Haskell, and Go.
  • Interpreted programming languages: programs written with this type of programming language rely on another program called the interpreter, which is in charge of running the code line by line. Examples include Python, JavaScript, PHP, and Ruby.

💡 Tip: according to this article on freeCodeCamp’s publication:

Most programming languages can have both compiled and interpreted implementations – the language itself is not necessarily compiled or interpreted. However, for simplicity’s sake, they’re typically referred to as such.

There are other types of programming languages based on different criteria, such as:

  • Procedural programming languages
  • Functional programming languages
  • Object-oriented programming languages
  • Scripting languages
  • Logic programming languages

And the list of types of programming languages continues. This is very interesting because you can analyze the characteristics of a programming language to help you choose the right one for your project.

🔹 How to Contribute to Open Source Projects


GitHub’s Home Page.

Finally, you might think that coding implies sitting at a desk for many hours looking at your code without any human interaction. But let me tell you that this does not have to be true at all. You can be part of a learning community or a developer community.

Initially, when you are learning how to code, you can participate in a learning community like freeCodeCamp. This way, you will share your journey with others who are learning how to code, just like you.

Then, when you have enough skills and confidence in your knowledge, you can practice by contributing to open source projects and joining developer communities.

Open source software is defined by Opensource.com as:

Software with source code that anyone can inspect, modify, and enhance.

GitHub is an online platform for hosting projects with version control. There, you can find many open source projects (like freeCodeCamp) that you can contribute to and practice your skills.

💡 Tip: many open source projects welcome first-time contributions and contributions from all skill levels. These are great opportunities to practice your skills and to contribute to real-world projects.  


freeCodeCamp’s GitHub repository.

Contributing to open source projects on GitHub is a great way to acquire experience working and communicating with other developers. This is another important skill for finding a job in this field.


GitHub tracks your contributions and shows them on your profile. These are shown as interactive gray and green squares that represent days of the current year. A darker shade of green means that more contributions were made on that day. Image taken from this GitHub article.

Working on a team is a great experience. I totally recommend it once you feel comfortable enough with your skills and knowledge.

You did it! You reached the end of this article. Great work. Now you know what programming is all about. Let’s see a brief summary.

🔸 In Summary

  • Programming is a very powerful skill. If you learn how to code, you can make your vision come true.
  • Programming has many different applications in many different fields. You can find an application for programming in basically any field you choose.
  • Programming languages can be classified based on different criteria and they share basic concepts such as variables, conditionals, loops, and functions.
  • Always set goals and take detailed notes. To succeed as a programmer, you need to be enthusiastic and consistent.

Thank you very much for reading my article. I hope you liked it and found it helpful. Now you know why you should learn how to code.

🔅 I invite you to follow me on Twitter (@EstefaniaCassN) and YouTube (Coding with Estefania) to find coding tutorials.



Learn to code for free. freeCodeCamp’s open source curriculum has helped more than 40,000 people get jobs as developers. Get started

Vocabulary

1. Match the words with their definitions:

  1. epiphany (n.) [ɪ’pɪfənɪ]            a) existing, happening, or done at the same time
  2. constraint (n.) [kənstre͟ɪnt]        b) having a tendency to be affected by it or to do it
  3. encumber (v.) [ɪn’kʌmbə]            c) a sudden realization of great truth
  4. concurrent (adj.) [kən’kʌr(ə)nt]    d) a limitation or restriction
  5. prone (adj.) [pro͟ʊn]                e) restrict or impede (someone or something) in such a way that free action or movement is difficult
  6. compile (v.) [kəm’paɪl]             f) include or contain (something) as a constituent part
  7. linker (n.) [‘liŋkər]               g) convert (a program) into a machine-code or lower-level form in which the program can be executed
  8. embody (v.) [ɪm’bɔdɪ], [em’bɔdɪ]    h) a program used with a compiler or assembler to provide links to the libraries needed for an executable program

Before you read

1. Discuss the following questions with your partner:

  • What do you know about programming languages and paradigms?
  • Is there any difference between them? If so, what is it?
  • What are the reasons for using programming languages and paradigms?

2. Skim the text to check your ideas.

READING

What is what?

In this article, we will discuss programming languages and paradigms so that you have a complete understanding. Let us first examine whether there is any difference.

The difference between programming paradigms and programming languages is that a programming language is an artificial language with a vocabulary and a set of grammatical rules for instructing a computer to perform specific tasks, while a programming paradigm is a particular way (i.e., a ‘school of thought’) of looking at a programming problem.

The term programming language usually refers to high-level languages, such as BASIC, C, C++, COBOL, FORTRAN, Ada, and Pascal. Each language has a unique set of keywords (words that it understands) and a special syntax for organizing program instructions. High-level programming languages, while simple compared to human languages, are more complex than the languages the computer actually understands, called machine languages. Each different type of CPU has its own unique machine language. Assembly languages lie between machine languages and high-level languages. They are similar to machine languages, but they are much easier to program in because they allow a programmer to substitute names for numbers; machine languages consist of numbers only. Lying above high-level languages are languages called fourth-generation languages (usually abbreviated 4GL). 4GLs are far removed from machine languages and represent the class of computer languages closest to human languages. Regardless of what language you use, you eventually need to convert your program into machine language so that the computer can understand it. There are two ways to do this: compile the program or interpret the program. The most common way to run a program written in a high-level language is to compile it, that is, to transform it from source code into object code. Programmers write programs in a form called source code. Source code must go through several steps before it becomes an executable program. The first step is to pass the source code through a compiler, which translates the high-level language instructions into object code. The final step in producing an executable program, after the compiler has produced object code, is to pass the object code through a linker. The linker combines modules and gives real values to all symbolic addresses, thereby producing machine code.
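
To make this pipeline concrete, here is a minimal illustrative sketch of our own (not part of the original text). The build commands in the comment are typical for a Unix-style C toolchain and may differ on your system:

/* hello.c - a tiny program to illustrate the compile-and-link pipeline.
 *
 * Typical build steps with a Unix-style C compiler driver:
 *   cc -c hello.c         compiler: source code -> object code (hello.o)
 *   cc hello.o -o hello   linker: object code + libraries -> executable
 */
#include <stdio.h>

int main(void)
{
    /* printf is defined in the C standard library; the linker resolves
     * this symbolic reference when it produces the executable. */
    printf("Hello, world!\n");
    return 0;
}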

The
other method is to pass the program through an interpreter. An
interpreter translates high-level instructions into an intermediate
form, which it then executes. In contrast, a compiler translates
high-level instructions directly into machine language. Compiled
programs generally run faster than interpreted programs. The
advantage of an interpreter, however, is that it does not need to go
through the compilation stage during which machine instructions are
generated. This process can be time-consuming if the program is
long. The interpreter, on the other hand, can immediately execute
high-level programs. For this reason, interpreters are sometimes
used during the development of a program, when a programmer wants to
add small sections at a time and test them quickly. In addition,
interpreters are often used in education because they allow students
to program interactively. Both interpreters and compilers are
available for most high-level languages. However, BASIC and LISP are
especially designed to be executed by an interpreter. In addition,
page description languages, such as PostScript, use an interpreter.
Every PostScript printer, for example, has a built-in interpreter
that executes PostScript instructions. The question of which
language is best is one that consumes a lot of time and energy among
computer professionals. Every language has its strengths and
weaknesses. For example, FORTRAN is a particularly good language for
processing numerical data, but it does not lend itself very well to
organizing large programs. Pascal is very good for writing
well-structured and readable programs, but it is not as flexible as
the C programming language. C++ embodies
powerful object-oriented features, but it is complex and difficult
to learn. The choice of which language to use depends on the type of
computer the program is to run
on, what sort of program it is, and the expertise of the programmer.
Computer programmers have evolved from the early days of bit-processing first-generation languages into sophisticated logical designers of complex software applications. Programming is a rich
discipline and practical programming languages are usually quite
complicated. Fortunately, the important ideas of programming
languages are simple.

Adapted from http://www.info.ucl.ac.be/~pvr/paradigms.htm

Usually, the word «paradigm» is used to describe a thought pattern or methodology that exists during a certain period of time. When scientists refer to a scientific paradigm, they are talking about the prevailing system of ideas that was dominant in a scientific field at a point in time. When a person or field has a paradigm shift, it means that they are no longer using the old methods of thought and approach, but have decided on a new approach, often reached through an epiphany.

A programming paradigm is a framework that defines how the user conceptualizes and interprets complex problems. It is also a fundamental style or logical approach to programming a computer, based on a mathematical theory or a coherent set of principles, used in software engineering to implement a programming language. There are currently 27 paradigms in the world (see the chart above). Most of them share similar concepts extending from the four main programming paradigms.

Programming languages should support many paradigms. Let us name the four main programming paradigms: the imperative paradigm, the functional paradigm, the logical paradigm, and the object-oriented paradigm. Other possible programming paradigms are the visual paradigm, the parallel/concurrent paradigms, and the constraint-based paradigm. The paradigms are not exclusive, but reflect the different emphases of language designers. Most practical languages embody features of more than one paradigm.

Each
paradigm supports a set of concepts that makes it the best for a
certain kind of problem. For example, object-oriented programming is
best for problems with a large number of related data abstractions
organized in a hierarchy. Logic programming is best for transforming
or navigating complex symbolic structures according to logical
rules. Discrete synchronous programming is best for reactive
problems, i.e., problems that consist of reactions to sequences of
external events. Programming paradigms are not unique to each language within the computer programming domain; many programming languages utilize multiple paradigms. The term paradigm is best described as a «pattern or model.» Therefore, a programming paradigm can be defined as a pattern or model used within a software programming language to create software applications. Languages that support these three paradigms are given in the classification table below.

  • Imperative/Algorithmic: Algol, Cobol, PL/1, Ada, C, Modula-3, Esterel
  • Declarative (Functional Programming): Lisp, Haskell, ML, Miranda, APL
  • Declarative (Logic Programming): Prolog
  • Object-Oriented: Smalltalk, Simula, C++, Java

Popular
mainstream languages such as Java or C++ support just one or two
separate paradigms. This is unfortunate, since different programming
problems need different programming concepts to solve them cleanly,
and those one or two paradigms often do not contain the right
concepts. A language should ideally support many concepts in a
well-factored way, so that the programmer can choose the right
concepts whenever they are needed without being encumbered
by the others. This style of programming is sometimes called
multiparadigm programming, implying that it is something exotic and
out of the ordinary.

Programming
languages are extremely logical and follow standard rules of
mathematics. Each language has a unique method for applying these
rules, especially around the areas of functions, variables, methods,
and objects. For example, programs written in C++ or Object Pascal
can be purely procedural, or purely object-oriented, or contain
elements of both paradigms. Software designers and programmers
decide how to use those paradigm elements. In object-oriented
programming, programmers can think of a program as a collection of
interacting objects, while in functional programming a program can
be thought of as a sequence of stateless function evaluations. When
programming computers or systems with many processors,
process-oriented programming allows programmers to think about
applications as sets of concurrent
processes acting upon logically shared data structures. Just as
different groups in software engineering advocate different
methodologies,
different programming languages advocate different programming
paradigms.
Some languages are designed to support one particular paradigm
(Smalltalk supports object-oriented programming, Haskell supports
functional programming), while other programming languages support
multiple paradigms (such as Object Pascal, C++, C#, Visual Basic,
Common Lisp, Scheme, Perl, Python, Ruby, Oz and F Sharp).

It
is helpful to understand the history of the programming
language and software in general to better grasp the concept of the
programming
paradigm. In the
early days of software development, software engineering was
completed by creating binary code or machine code, represented by 1s
and 0s. These binary manipulations caused programs to react in a
specified manner. This early computer programming is commonly
referred to as the «low-level» programming
paradigm. This
was a tedious and error-prone method for creating programs. Programming languages quickly evolved into the «procedural» paradigm or third-generation languages, including COBOL, Fortran, and BASIC.
These procedural programming languages define programs in a
step-by-step approach.

The
next evolution of programming
languages was to create a more logical approach to software
development, the «object oriented» programming
paradigm. This
approach is used by the programming
languages of Java™, Smalltalk, and Eiffel. This paradigm
attempts to abstract modules of a program into reusable objects.

In
addition to these programming
paradigms, there is also the «declarative» paradigm
and the «functional» paradigm.
While some programming
languages strictly enforce the use of a single paradigm,
many support multiple paradigms. Some examples of these types
include C++, C#, and Visual Basic®.

Each
paradigm
has unique requirements on the usage and abstractions of processes
within the programming
language. Nevertheless, Peter Van Roy says that understanding the
right concepts can help improve programming style even in languages
that do not directly support them, just as object-oriented
programming is possible in C with the right programmer attitude.
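
As an illustrative sketch of that attitude (our example, not Van Roy's), an object-oriented style can be emulated in plain C by pairing data with function pointers that act as methods:

#include <stdio.h>

/* A "class" emulated with a struct: data fields plus a function
 * pointer that plays the role of a method. */
typedef struct Shape {
    double width, height;
    double (*area)(const struct Shape *self);
} Shape;

static double rect_area(const Shape *self) {
    return self->width * self->height;
}

static double tri_area(const Shape *self) {
    return 0.5 * self->width * self->height;
}

int main(void) {
    /* Two "objects" share one interface but behave differently:
     * a hand-rolled form of polymorphism. */
    Shape r = { 3.0, 4.0, rect_area };
    Shape t = { 3.0, 4.0, tri_area };
    printf("rectangle: %.1f, triangle: %.1f\n", r.area(&r), t.area(&t));
    return 0;
}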

By
allowing developers flexibility within programming
languages, a programming
paradigm can be
utilized that best meets the business problem to be solved. As the
art of computer programming
has evolved, so too has the creation of the programming
paradigm. By
creating a framework of a pattern or model for system development,
programmers can create computer programs that are most efficient within the selected paradigm.

LANGUAGE DEVELOPMENT

A programming language is a system of notation for writing computer programs.[1] Most programming languages are text-based formal languages, but they may also be graphical. They are a kind of computer language.

The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning), which are usually defined by a formal language. Some languages are defined by a specification document (for example, the C programming language is specified by an ISO Standard) while other languages (such as Perl) have a dominant implementation that is treated as a reference. Some languages have both, with the basic language defined by a standard and extensions taken from the dominant implementation being common.

Programming language theory is the subfield of computer science that studies the design, implementation, analysis, characterization, and classification of programming languages.

Definitions

There are many considerations when defining what constitutes a programming language.

Computer languages vs programming languages

The term computer language is sometimes used interchangeably with programming language.[2] However, the usage of both terms varies among authors, including the exact scope of each. One usage describes programming languages as a subset of computer languages.[3] Similarly, languages used in computing that have a different goal than expressing computer programs are generically designated computer languages. For instance, markup languages are sometimes referred to as computer languages to emphasize that they are not meant to be used for programming.[4]
One way of classifying computer languages is by the computations they are capable of expressing, as described by the theory of computation. The majority of practical programming languages are Turing complete,[5] and all Turing complete languages can implement the same set of algorithms. ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete, yet are often called programming languages.[6][7] However, some authors restrict the term «programming language» to Turing complete languages.[1][8]

Another usage regards programming languages as theoretical constructs for programming abstract machines and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources.[9] John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.[10]

Domain and target

In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way.[11] Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines.

The domain of the language is also worth consideration. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages.[12][13][14] Programming languages may, however, share the syntax with markup languages if a computational semantics is defined. XSLT, for example, is a Turing complete language entirely using XML syntax.[15][16][17] Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset.[18][19]

Abstractions

Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The practical necessity that a programming language support adequate abstractions is expressed by the abstraction principle.[20] This principle is sometimes formulated as a recommendation to the programmer to make proper use of such abstractions.[21]

History

Early developments

Very early computers, such as Colossus, were programmed without the help of a stored program, by modifying their circuitry or setting banks of physical controls.

Slightly later, programs could be written in machine language, where the programmer writes each instruction in a numeric form the hardware can execute directly. For example, the instruction to add the value in two memory locations might consist of 3 numbers: an «opcode» that selects the «add» operation, and two memory locations. The programs, in decimal or binary form, were read in from punched cards, paper tape, magnetic tape or toggled in on switches on the front panel of the computer. Machine languages were later termed first-generation programming languages (1GL).

The next step was the development of the so-called second-generation programming languages (2GL) or assembly languages, which were still closely tied to the instruction set architecture of the specific computer. These served to make the program much more human-readable and relieved the programmer of tedious and error-prone address calculations.

The first high-level programming languages, or third-generation programming languages (3GL), were written in the 1950s. An early high-level programming language to be designed for a computer was Plankalkül, developed for the German Z3 by Konrad Zuse between 1943 and 1945. However, it was not implemented until 1998 and 2000.[22]

John Mauchly’s Short Code, proposed in 1949, was one of the first high-level languages ever developed for an electronic computer.[23] Unlike machine code, Short Code statements represented mathematical expressions in an understandable form. However, the program had to be translated into machine code every time it ran, making the process much slower than running the equivalent machine code.

At the University of Manchester, Alick Glennie developed Autocode in the early 1950s. As a programming language, it used a compiler to automatically convert the language into machine code. The first code and compiler were developed in 1952 for the Mark 1 computer at the University of Manchester, and it is considered to be the first compiled high-level programming language.[24][25]

The second auto code was developed for the Mark 1 by R. A. Brooker in 1954 and was called the «Mark 1 Autocode». Brooker also developed an auto code for the Ferranti Mercury in the 1950s in conjunction with the University of Manchester. The version for the EDSAC 2 was devised by D. F. Hartley of University of Cambridge Mathematical Laboratory in 1961. Known as EDSAC 2 Autocode, it was a straight development from Mercury Autocode adapted for local circumstances and was noted for its object code optimization and source-language diagnostics which were advanced for the time. A contemporary but separate thread of development, Atlas Autocode was developed for the University of Manchester Atlas 1 machine.

In 1954, FORTRAN was invented at IBM by John Backus. It was the first widely used high-level general-purpose programming language to have a functional implementation, as opposed to just a design on paper.[26][27] It is still a popular language for high-performance computing[28] and is used for programs that benchmark and rank the world’s fastest supercomputers.[29]

Another early programming language was devised by Grace Hopper in the US, called FLOW-MATIC. It was developed for the UNIVAC I at Remington Rand during the period from 1955 until 1959. Hopper found that business data processing customers were uncomfortable with mathematical notation, and in early 1955, she and her team wrote a specification for an English programming language and implemented a prototype.[30] The FLOW-MATIC compiler became publicly available in early 1958 and was substantially complete in 1959.[31] FLOW-MATIC was a major influence in the design of COBOL, since only it and its direct descendant AIMACO were in actual use at the time.[32]

Refinement

The increased use of high-level languages introduced a requirement for low-level programming languages or system programming languages. These languages, to varying degrees, provide facilities between assembly languages and high-level languages. They can be used to perform tasks that require direct access to hardware facilities but still provide higher-level control structures and error-checking.

The period from the 1960s to the late 1970s brought the development of the major language paradigms now in use:

  • APL introduced array programming and influenced functional programming.[33]
  • ALGOL refined both structured procedural programming and the discipline of language specification; the «Revised Report on the Algorithmic Language ALGOL 60» became a model for how later language specifications were written.
  • Lisp, implemented in 1958, was the first dynamically-typed functional programming language.
  • In the 1960s, Simula was the first language designed to support object-oriented programming; in the mid-1970s, Smalltalk followed with the first «purely» object-oriented language.
  • C was developed between 1969 and 1973 as a system programming language for the Unix operating system and remains popular.[34]
  • Prolog, designed in 1972, was the first logic programming language.
  • In 1978, ML built a polymorphic type system on top of Lisp, pioneering statically-typed functional programming languages.

Each of these languages spawned descendants, and most modern programming languages count at least one of them in their ancestry.

The 1960s and 1970s also saw considerable debate over the merits of structured programming, and whether programming languages should be designed to support it.[35] Edsger Dijkstra, in a famous 1968 letter published in the Communications of the ACM, argued that Goto statements should be eliminated from all «higher-level» programming languages.[36]

Consolidation and growth


The 1980s were years of relative consolidation. C++ combined object-oriented and systems programming. The United States government standardized Ada, a systems programming language derived from Pascal and intended for use by defense contractors. In Japan and elsewhere, vast sums were spent investigating the so-called «fifth-generation» languages that incorporated logic programming constructs.[37] The functional languages community moved to standardize ML and Lisp. Rather than inventing new paradigms, all of these movements elaborated upon the ideas invented in the previous decades.

One important trend in language design for programming large-scale systems during the 1980s was an increased focus on the use of modules or large-scale organizational units of code. Modula-2, Ada, and ML all developed notable module systems in the 1980s, which were often wedded to generic programming constructs.[38]

The rapid growth of the Internet in the mid-1990s created opportunities for new languages. Perl, originally a Unix scripting tool first released in 1987, became common in dynamic websites. Java came to be used for server-side programming, and bytecode virtual machines became popular again in commercial settings with their promise of «Write once, run anywhere» (UCSD Pascal had been popular for a time in the early 1980s). These developments were not fundamentally novel; rather, they were refinements of many existing languages and paradigms (although their syntax was often based on the C family of programming languages).

Programming language evolution continues, in both industry and research. Current directions include security and reliability verification, new kinds of modularity (mixins, delegates, aspects), and database integration such as Microsoft’s LINQ.

Fourth-generation programming languages (4GL) are computer programming languages that aim to provide a higher level of abstraction of the internal computer hardware details than 3GLs. Fifth-generation programming languages (5GL) are programming languages based on solving problems using constraints given to the program, rather than using an algorithm written by a programmer.

Elements

All programming languages have some primitive building blocks for the description of data and the processes or transformations applied to them (like the addition of two numbers or the selection of an item from a collection). These primitives are defined by syntactic and semantic rules which describe their structure and meaning respectively.

Syntax

A programming language’s surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, some programming languages are more graphical in nature, using visual relationships between symbols to specify a program.

The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Since most languages are textual, this article discusses textual syntax.

The programming language syntax is usually defined using a combination of regular expressions (for lexical structure) and Backus–Naur form (for grammatical structure). Below is a simple grammar, based on Lisp:

expression ::= atom | list
atom       ::= number | symbol
number     ::= [+-]?['0'-'9']+
symbol     ::= ['A'-'Z''a'-'z'].*
list       ::= '(' expression* ')'

This grammar specifies the following:

  • an expression is either an atom or a list;
  • an atom is either a number or a symbol;
  • a number is an unbroken sequence of one or more decimal digits, optionally preceded by a plus or minus sign;
  • a symbol is a letter followed by zero or more of any characters (excluding whitespace); and
  • a list is a matched pair of parentheses, with zero or more expressions inside it.

The following are examples of well-formed token sequences in this grammar: 12345, () and (a b c232 (1)).
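
As an illustrative sketch of our own (not from the article), this grammar can be checked by a small recursive-descent recognizer in C. It follows the rules above, except that symbols are also terminated by parentheses so they can appear inside lists:

#include <ctype.h>
#include <stdio.h>

/* Recursive-descent recognizer for the toy Lisp grammar above.
 * p points at the current position; each function advances past
 * what it matched and returns 1 on success, 0 on failure. */
static const char *p;

static int expression(void);

static void skip_ws(void) { while (*p == ' ') p++; }

static int atom(void) {
    const char *start = p;
    if (*p == '+' || *p == '-') p++;          /* optional sign */
    if (isdigit((unsigned char)*p)) {         /* number */
        while (isdigit((unsigned char)*p)) p++;
        return 1;
    }
    p = start;
    if (isalpha((unsigned char)*p)) {         /* symbol */
        while (*p && *p != ' ' && *p != '(' && *p != ')') p++;
        return 1;
    }
    return 0;
}

static int list(void) {
    if (*p != '(') return 0;
    p++;
    skip_ws();
    while (*p && *p != ')') {                 /* zero or more expressions */
        if (!expression()) return 0;
        skip_ws();
    }
    if (*p != ')') return 0;
    p++;
    return 1;
}

static int expression(void) {
    return (*p == '(') ? list() : atom();
}

int main(void) {
    const char *samples[] = { "12345", "()", "(a b c232 (1))" };
    for (int i = 0; i < 3; i++) {
        p = samples[i];
        int ok = expression() && *p == '\0';
        printf("%-16s %s\n", samples[i], ok ? "well-formed" : "rejected");
    }
    return 0;
}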

Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed, per the language’s rules; and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it.

Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence or the sentence may be false:

  • «Colorless green ideas sleep furiously.» is grammatically well-formed but has no generally accepted meaning.
  • «John is a married bachelor.» is grammatically well-formed but expresses a meaning that cannot be true.

The following C language fragment is syntactically correct, but performs operations that are not semantically defined (the operation *p >> 4 has no meaning for a value having a complex type and p->im is not defined because the value of p is the null pointer):

complex *p = NULL;
complex abs_p = sqrt(*p >> 4 + p->im);

If the type declaration on the first line were omitted, the program would trigger an error on the undefined variable p during compilation. However, the program would still be syntactically correct since type declarations provide only semantic information.

The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars.[39] Some languages, including Perl and Lisp, contain constructs that allow execution during the parsing phase. Languages that have constructs that allow the programmer to alter the behavior of the parser make syntax analysis an undecidable problem, and generally blur the distinction between parsing and execution.[40] In contrast to Lisp’s macro system and Perl’s BEGIN blocks, which may contain general computations, C macros are merely string replacements and do not require code execution.[41]

Semantics

The term semantics refers to the meaning of languages, as opposed to their form (syntax).

Static semantics

A static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in standard syntactic formalisms.[1] For compiled languages, static semantics essentially include those semantic rules that can be checked at compile time. Examples include checking that every identifier is declared before it is used (in languages that require such declarations) or that the labels on the arms of a case statement are distinct.[42] Many important restrictions of this type, like checking that identifiers are used in the appropriate context (e.g. not adding an integer to a function name), or that subroutine calls have the appropriate number and type of arguments, can be enforced by defining them as rules in a logic called a type system. Other forms of static analyses like data flow analysis may also be part of static semantics. Newer programming languages like Java and C# have definite assignment analysis, a form of data flow analysis, as part of their static semantics.

Dynamic semantics

Once data has been specified, the machine must be instructed to perform operations on the data. For example, the semantics may define the strategy by which expressions are evaluated to values, or the manner in which control structures conditionally execute statements. The dynamic semantics (also known as execution semantics) of a language defines how and when the various constructs of a language should produce a program behavior. There are many ways of defining execution semantics. Natural language is often used to specify the execution semantics of languages commonly used in practice. A significant amount of academic research went into formal semantics of programming languages, which allows execution semantics to be specified in a formal manner. Results from this field of research have seen limited application to programming language design and implementation outside academia.

Type system

A type system defines how a programming language classifies values and expressions into types, how it can manipulate those types and how they interact. The goal of a type system is to verify and usually enforce a certain level of correctness in programs written in that language by detecting certain incorrect operations. Any decidable type system involves a trade-off: while it rejects many incorrect programs, it can also prohibit some correct, albeit unusual programs. In order to bypass this downside, a number of languages have type loopholes, usually unchecked casts that may be used by the programmer to explicitly allow a normally disallowed operation between different types. In most typed languages, the type system is used only to type check programs, but a number of languages, usually functional ones, infer types, relieving the programmer from the need to write type annotations. The formal design and study of type systems is known as type theory.

Typed versus untyped languages

A language is typed if the specification of every operation defines types of data to which the operation is applicable.[43] For example, the data represented by "this text between the quotes" is a string, and in many programming languages dividing a number by a string has no meaning and will not be executed. The invalid operation may be detected when the program is compiled («static» type checking) and will be rejected by the compiler with a compilation error message, or it may be detected while the program is running («dynamic» type checking), resulting in a run-time exception. Many languages allow a function called an exception handler to handle this exception and, for example, always return «-1» as the result.
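
For instance (an illustration of ours, not from the article), a C compiler rejects such an ill-typed operation statically:

#include <stdio.h>

int main(void)
{
    /* Uncommenting the next line produces a compile-time error
     * ("invalid operands to binary /"): dividing a number by a
     * string has no meaning under C's static type rules.
     *
     * int x = 5 / "this text between the quotes";
     */
    printf("type errors like this are caught before the program runs\n");
    return 0;
}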

A special case of typed languages is the single-typed languages. These are often scripting or markup languages, such as REXX or SGML, and have only one data type, most commonly character strings, which are used for both symbolic and numeric data.

In contrast, an untyped language, such as most assembly languages, allows any operation to be performed on any data, generally sequences of bits of various lengths.[43] High-level untyped languages include BCPL, Tcl, and some varieties of Forth.

In practice, while few languages are considered typed from the point of view of type theory (verifying or rejecting all operations), most modern languages offer a degree of typing.[43] Many production languages provide means to bypass or subvert the type system, trading type safety for finer control over the program’s execution (see casting).

Static vis-à-vis dynamic typing

In static typing, all expressions have their types determined prior to when the program is executed, typically at compile-time. For example, 1 and (2+2) are integer expressions; they cannot be passed to a function that expects a string or stored in a variable that is defined to hold dates.[43]

Statically-typed languages can be either manifestly typed or type-inferred. In the first case, the programmer must explicitly write types at certain textual positions (for example, at variable declarations). In the second case, the compiler infers the types of expressions and declarations based on context. Most mainstream statically-typed languages, such as C++, C# and Java, are manifestly typed. Complete type inference has traditionally been associated with functional languages such as Haskell and ML.[44] However, many manifestly-typed languages support partial type inference; for example, C++, Java, and C# all infer types in certain limited cases.[45] Additionally, some programming languages allow for some types to be automatically converted to other types; for example, an int can be used where the program expects a float.

Dynamic typing, also called latent typing, determines the type-safety of operations at run time; in other words, types are associated with run-time values rather than textual expressions.[43] As with type-inferred languages, dynamically-typed languages do not require the programmer to write explicit type annotations on expressions. Among other things, this may permit a single variable to refer to values of different types at different points in the program execution. However, type errors cannot be automatically detected until a piece of code is actually executed, potentially making debugging more difficult. Lisp, Smalltalk, Perl, Python, JavaScript, and Ruby are all examples of dynamically-typed languages.

Weak and strong typing

Weak typing allows a value of one type to be treated as another, for example treating a string as a number.[43] This can occasionally be useful, but it can also allow some kinds of program faults to go undetected at compile time and even at run time.

Strong typing prevents these program faults. An attempt to perform an operation on the wrong type of value raises an error.[43] Strongly-typed languages are often termed type-safe or safe.

An alternative definition for «weakly typed» refers to languages, such as Perl and JavaScript, which permit a large number of implicit type conversions. In JavaScript, for example, the expression 2 * x implicitly converts x to a number, and this conversion succeeds even if x is null, undefined, an Array, or a string of letters. Such implicit conversions are often useful, but they can mask programming errors. Strong and static are now generally considered orthogonal concepts, but usage in the literature differs. Some use the term strongly typed to mean strongly, statically typed, or, even more confusingly, to mean simply statically typed. Thus C has been called both strongly typed and weakly, statically typed.[46][47]

It may seem odd to some professional programmers that C could be «weakly, statically typed». However, the use of the generic pointer, the void* pointer, does allow casting pointers to other pointers without needing to do an explicit cast. This is extremely similar to somehow casting an array of bytes to any kind of datatype in C without using an explicit cast, such as (int) or (char).
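
A short illustration of ours (not from the article): C accepts these pointer conversions through void* with no explicit cast, even when later use of the result would be undefined:

#include <stdio.h>

int main(void)
{
    int n = 42;
    void *vp = &n;    /* int* converts to void* implicitly */
    int *ip = vp;     /* ...and back, still with no explicit cast */
    double *dp = vp;  /* also accepted by the compiler, although
                       * reading *dp here would be undefined behavior */
    printf("%d\n", *ip);
    (void)dp;         /* silence the unused-variable warning */
    return 0;
}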

Standard library and run-time system

Most programming languages have an associated core library (sometimes known as the «standard library», especially if it is included as part of the published language standard), which is conventionally made available by all implementations of the language. Core libraries typically include definitions for commonly used algorithms, data structures, and mechanisms for input and output.

The line between a language and its core library differs from language to language. In some cases, the language designers may treat the library as a separate entity from the language. However, a language’s core library is often treated as part of the language by its users, and some language specifications even require that this library be made available in all implementations. Indeed, some languages are designed so that the meanings of certain syntactic constructs cannot even be described without referring to the core library. For example, in Java, a string literal is defined as an instance of the java.lang.String class; similarly, in Smalltalk, an anonymous function expression (a «block») constructs an instance of the library’s BlockContext class. Conversely, Scheme contains multiple coherent subsets that suffice to construct the rest of the language as library macros, and so the language designers do not even bother to say which portions of the language must be implemented as language constructs, and which must be implemented as parts of a library.

Design and implementation

Programming languages share properties with natural languages related to their purpose as vehicles for communication, having a syntactic form separate from its semantics, and showing language families of related languages branching one from another.[48][49] But as artificial constructs, they also differ in fundamental ways from languages that have evolved through usage. A significant difference is that a programming language can be fully described and studied in its entirety since it has a precise and finite definition.[50] By contrast, natural languages have changing meanings given by their users in different communities. While constructed languages are also artificial languages designed from the ground up with a specific purpose, they lack the precise and complete semantic definition that a programming language has.

Many programming languages have been designed from scratch, altered to meet new needs, and combined with other languages. Many have eventually fallen into disuse. Although there have been attempts to design one «universal» programming language that serves all purposes, all of them have failed to be generally accepted as filling this role.[51] The need for diverse programming languages arises from the diversity of contexts in which languages are used:

  • Programs range from tiny scripts written by individual hobbyists to huge systems written by hundreds of programmers.
  • Programmers range in expertise from novices who need simplicity above all else to experts who may be comfortable with considerable complexity.
  • Programs must balance speed, size, and simplicity on systems ranging from microcontrollers to supercomputers.
  • Programs may be written once and not change for generations, or they may undergo continual modification.
  • Programmers may simply differ in their tastes: they may be accustomed to discussing problems and expressing them in a particular language.

One common trend in the development of programming languages has been to add more ability to solve problems using a higher level of abstraction. The earliest programming languages were tied very closely to the underlying hardware of the computer. As new programming languages have developed, features have been added that let programmers express ideas that are more remote from simple translation into underlying hardware instructions. Because programmers are less tied to the complexity of the computer, their programs can do more computing with less effort from the programmer. This lets them write more functionality per time unit.[52]


Natural-language programming has been proposed as a way to eliminate the need for a specialized language for programming. However, this goal remains distant and its benefits are open to debate. Edsger W. Dijkstra took the position that the use of a formal language is essential to prevent the introduction of meaningless constructs, and dismissed natural-language programming as «foolish».[53] Alan Perlis was similarly dismissive of the idea.[54] Hybrid approaches have been taken in Structured English and SQL.

A language’s designers and users must construct a number of artifacts that govern and enable the practice of programming. The most important of these artifacts are the language specification and implementation.

Specification

The specification of a programming language is an artifact that the language users and the implementors can use to agree upon whether a piece of source code is a valid program in that language, and if so what its behavior shall be.

A programming language specification can take several forms, including the following:

  • An explicit definition of the syntax, static semantics, and execution semantics of the language. While syntax is commonly specified using a formal grammar, semantic definitions may be written in natural language (e.g., as in the C language), or a formal semantics (e.g., as in Standard ML[55] and Scheme[56] specifications).
  • A description of the behavior of a translator for the language (e.g., the C++ and Fortran specifications). The syntax and semantics of the language have to be inferred from this description, which may be written in natural or formal language.
  • A reference or model implementation, sometimes written in the language being specified (e.g., Prolog or ANSI REXX[57]). The syntax and semantics of the language are explicit in the behavior of the reference implementation.

Implementation

An implementation of a programming language provides a way to write programs in that language and execute them on one or more configurations of hardware and software. There are, broadly, two approaches to programming language implementation: compilation and interpretation. It is generally possible to implement a language using either technique.

The output of a compiler may be executed by hardware or a program called an interpreter. In some implementations that make use of the interpreter approach, there is no distinct boundary between compiling and interpreting. For instance, some implementations of BASIC compile and then execute the source one line at a time.

Programs that are executed directly on the hardware usually run much faster than those that are interpreted in software.[58][better source needed]

One technique for improving the performance of interpreted programs is just-in-time compilation. Here the virtual machine, just before execution, translates the blocks of bytecode which are going to be used to machine code, for direct execution on the hardware.

Proprietary languages

Although most of the most commonly used programming languages have fully open specifications and implementations, many programming languages exist only as proprietary programming languages with the implementation available only from a single vendor, which may claim that such a proprietary language is their intellectual property. Proprietary programming languages are commonly domain-specific languages or internal scripting languages for a single product; some proprietary languages are used only internally within a vendor, while others are available to external users.

Some programming languages exist on the border between proprietary and open; for example, Oracle Corporation asserts proprietary rights to some aspects of the Java programming language,[59] and Microsoft’s C# programming language, which has open implementations of most parts of the system, also has Common Language Runtime (CLR) as a closed environment.[60]

Many proprietary languages are widely used, in spite of their proprietary nature; examples include MATLAB, VBScript, and Wolfram Language. Some languages may make the transition from closed to open; for example, Erlang was originally Ericsson’s internal programming language.[61]

Use

Thousands of different programming languages have been created, mainly in the computing field.[62]
Individual software projects commonly use five programming languages or more.[63]

Programming languages differ from most other forms of human expression in that they require a greater degree of precision and completeness. When using a natural language to communicate with other people, human authors and speakers can be ambiguous and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers «do exactly what they are told to do», and cannot «understand» what code the programmer intended to write. The combination of the language definition, a program, and the program’s inputs must fully specify the external behavior that occurs when the program is executed, within the domain of control of that program. On the other hand, ideas about an algorithm can be communicated to humans without the precision required for execution by using pseudocode, which interleaves natural language with code written in a programming language.

A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data. A programmer uses the abstractions present in the language to represent the concepts involved in a computation. These concepts are represented as a collection of the simplest elements available (called primitives).[64] Programming is the process by which programmers combine these primitives to compose new programs, or adapt existing ones to new uses or a changing environment.

Programs for a computer might be executed in a batch process without human interaction, or a user might type commands in an interactive session of an interpreter. In this case the «commands» are simply programs, whose execution is chained together. When a language can run its commands through an interpreter (such as a Unix shell or other command-line interface), without compiling, it is called a scripting language.[65]

Measuring language usage

Determining which is the most widely used programming language is difficult since the definition of usage varies by context. One language may occupy the greater number of programmer hours, a different one has more lines of code, and a third may consume the most CPU time. Some languages are very popular for particular kinds of applications. For example, COBOL is still strong in the corporate data center, often on large mainframes;[66][67] Fortran in scientific and engineering applications; Ada in aerospace, transportation, military, real-time, and embedded applications; and C in embedded applications and operating systems. Other languages are regularly used to write many different kinds of applications.

Various methods of measuring language popularity, each subject to a different bias over what is measured, have been proposed:

  • counting the number of job advertisements that mention the language[68]
  • the number of books sold that teach or describe the language[69]
  • estimates of the number of existing lines of code written in the language – which may underestimate languages not often found in public searches[70]
  • counts of language references (i.e., to the name of the language) found using a web search engine.

Combining and averaging information from various internet sites, stackify.com reported the ten most popular programming languages (in descending order by overall popularity): Java, C, C++, Python, C#, JavaScript, VB .NET, R, PHP, and MATLAB.[71]

Dialects, flavors and implementations

A dialect of a programming language or a data exchange language is a (relatively small) variation or extension of the language that does not change its intrinsic nature. With languages such as Scheme and Forth, standards may be considered insufficient, inadequate, or illegitimate by implementors, so often they will deviate from the standard, making a new dialect. In other cases, a dialect is created for use in a domain-specific language, often a subset. In the Lisp world, most languages that use basic S-expression syntax and Lisp-like semantics are considered Lisp dialects, although they vary wildly, as do, say, Racket and Clojure. As it is common for one language to have several dialects, it can become quite difficult for an inexperienced programmer to find the right documentation. The BASIC programming language has many dialects.

Taxonomies

There is no overarching classification scheme for programming languages. A given programming language does not usually have a single ancestor language. Languages commonly arise by combining the elements of several predecessor languages with new ideas in circulation at the time. Ideas that originate in one language will diffuse throughout a family of related languages, and then leap suddenly across familial gaps to appear in an entirely different family.

The task is further complicated by the fact that languages can be classified along multiple axes. For example, Java is both an object-oriented language (because it encourages object-oriented organization) and a concurrent language (because it contains built-in constructs for running multiple threads in parallel). Python is an object-oriented scripting language.[72]

In broad strokes, programming languages are classified by programming paradigm and intended domain of use, with general-purpose programming languages distinguished from domain-specific programming languages. Traditionally, programming languages have been regarded as describing computation in terms of imperative sentences, i.e. issuing commands. These are generally called imperative programming languages. A great deal of research in programming languages has been aimed at blurring the distinction between a program as a set of instructions and a program as an assertion about the desired answer, which is the main feature of declarative programming.[73] More refined paradigms include procedural programming, object-oriented programming, functional programming, and logic programming; some languages are hybrids of paradigms or multi-paradigmatic. An assembly language is not so much a paradigm as a direct model of an underlying machine architecture. By purpose, programming languages might be considered general purpose, system programming languages, scripting languages, domain-specific languages, or concurrent/distributed languages (or a combination of these).[74] Some general purpose languages were designed largely with educational goals.[75]

A programming language may also be classified by factors unrelated to the programming paradigm. For instance, most programming languages use English language keywords, while a minority do not. Other languages may be classified as being deliberately esoteric or not.

See also

  • Comparison of programming languages (basic instructions)
  • Comparison of programming languages
  • Computer programming
  • Computer science and Outline of computer science
  • Domain-specific language
  • Domain-specific modeling
  • Educational programming language
  • Esoteric programming language
  • Extensible programming
  • Category:Extensible syntax programming languages
  • Invariant-based programming
  • List of BASIC dialects
  • Lists of programming languages
  • List of programming language researchers
  • Programming languages used in most popular websites
  • Language-oriented programming
  • Logic programming
  • Literate programming
  • Metaprogramming
    • Ruby (programming language) § Metaprogramming
  • Modeling language
  • Programming language theory
  • Pseudocode
  • Rebol § Dialects
  • Reflection
  • Scientific programming language
  • Scripting language
  • Software engineering and List of software engineering topics

References

  1. ^ a b c Aaby, Anthony (2004). Introduction to Programming Languages. Archived from the original on 8 November 2012. Retrieved 29 September 2012.
  2. ^ Robert A. Edmunds, The Prentice-Hall standard glossary of computer terminology, Prentice-Hall, 1985, p. 91
  3. ^ Pascal Lando, Anne Lapujade, Gilles Kassel, and Frédéric Fürst, Towards a General Ontology of Computer Programs Archived 7 July 2015 at the Wayback Machine, ICSOFT 2007 Archived 27 April 2010 at the Wayback Machine, pp. 163–170
  4. ^ S.K. Bajpai, Introduction To Computers And C Programming, New Age International, 2007, ISBN 81-224-1379-X, p. 346
  5. ^ «Turing Completeness». www.cs.odu.edu. Retrieved 5 October 2022.
  6. ^ Digital Equipment Corporation. «Information Technology – Database Language SQL (Proposed revised text of DIS 9075)». ISO/IEC 9075:1992, Database Language SQL. Archived from the original on 21 June 2006. Retrieved 29 June 2006.
  7. ^ The Charity Development Group (December 1996). «The CHARITY Home Page». Archived from the original on 18 July 2006., «Charity is a categorical programming language…», «All Charity computations terminate.»
  8. ^ In mathematical terms, this means the programming language is Turing-complete MacLennan, Bruce J. (1987). Principles of Programming Languages. Oxford University Press. p. 1. ISBN 978-0-19-511306-8.
  9. ^ R. Narasimhan, Programming Languages and Computers: A Unified Metatheory, pp. 189—247 in Franz Alt, Morris Rubinoff (eds.) Advances in computers, Volume 8, Academic Press, 1994, ISBN 0-12-012108-5, p.215: «[…] the model […] for computer languages differs from that […] for programming languages in only two respects. In a computer language, there are only finitely many names—or registers—which can assume only finitely many values—or states—and these states are not further distinguished in terms of any other attributes. [author’s footnote:] This may sound like a truism but its implications are far-reaching. For example, it would imply that any model for programming languages, by fixing certain of its parameters or features, should be reducible in a natural way to a model for computer languages.»
  10. ^ John C. Reynolds, «Some thoughts on teaching programming and programming languages», SIGPLAN Notices, Volume 43, Issue 11, November 2008, p.109
  11. ^ Ben Ari, Mordechai (1996). Understanding Programming Languages. John Wiley and Sons. Programs and languages can be defined as purely formal mathematical objects. However, more people are interested in programs than in other mathematical objects such as groups, precisely because it is possible to use the program—the sequence of symbols—to control the execution of a computer. While we highly recommend the study of the theory of programming, this text will generally limit itself to the study of programs as they are executed on a computer.
  12. ^ XML in 10 points Archived 6 September 2009 at the Wayback Machine W3C, 1999, «XML is not a programming language.»
  13. ^ Powell, Thomas (2003). HTML & XHTML: the complete reference. McGraw-Hill. p. 25. ISBN 978-0-07-222942-4. HTML is not a programming language.
  14. ^ Dykes, Lucinda; Tittel, Ed (2005). XML For Dummies (4th ed.). Wiley. p. 20. ISBN 978-0-7645-8845-7. …it’s a markup language, not a programming language.
  15. ^ «What kind of language is XSLT?». IBM.com. 20 April 2005. Archived from the original on 11 May 2011.
  16. ^ «XSLT is a Programming Language». Msdn.microsoft.com. Archived from the original on 3 February 2011. Retrieved 3 December 2010.
  17. ^ Scott, Michael (2006). Programming Language Pragmatics. Morgan Kaufmann. p. 802. ISBN 978-0-12-633951-2. XSLT, though highly specialized to the transformation of XML, is a Turing-complete programming language.
  18. ^ Oetiker, Tobias; Partl, Hubert; Hyna, Irene; Schlegl, Elisabeth (20 June 2016). «The Not So Short Introduction to LATEX 2ε» (Version 5.06). tobi.oetiker.ch. pp. 1–157. Archived (PDF) from the original on 14 March 2017.
  19. ^ Syropoulos, Apostolos; Antonis Tsolomitis; Nick Sofroniou (2003). Digital typography using LaTeX. Springer-Verlag. p. 213. ISBN 978-0-387-95217-8. TeX is not only an excellent typesetting engine but also a real programming language.
  20. ^ David A. Schmidt, The structure of typed programming languages, MIT Press, 1994, ISBN 0-262-19349-3, p. 32
  21. ^ Pierce, Benjamin (2002). Types and Programming Languages. MIT Press. p. 339. ISBN 978-0-262-16209-8.
  22. ^ Rojas, Raúl, et al. (2000). «Plankalkül: The First High-Level Programming Language and its Implementation». Institut für Informatik, Freie Universität Berlin, Technical Report B-3/2000. (full text) Archived 18 October 2014 at the Wayback Machine
  23. ^ Sebesta, W.S. (2006). Concepts of Programming Languages, p. 44. ISBN 0-321-33025-0
  24. ^ Knuth, Donald E.; Pardo, Luis Trabb. «Early development of programming languages». Encyclopedia of Computer Science and Technology. 7: 419–493.
  25. ^ Peter J. Bentley (2012). Digitized: The Science of Computers and how it Shapes Our World. Oxford University Press. p. 87. ISBN 9780199693795. Archived from the original on 29 August 2016.
  26. ^ «Fortran creator John Backus dies – Tech and gadgets». NBC News. 20 March 2007. Retrieved 25 April 2010.
  27. ^ «CSC-302 99S : Class 02: A Brief History of Programming Languages». Math.grin.edu. Archived from the original on 15 July 2010. Retrieved 25 April 2010.
  28. ^ Eugene Loh (18 June 2010). «The Ideal HPC Programming Language». Queue. 8 (6). Archived from the original on 4 March 2016.
  29. ^ «HPL – A Portable Implementation of the High-Performance Linpack Benchmark for Distributed-Memory Computers». Archived from the original on 15 February 2015. Retrieved 21 February 2015.
  30. ^ Hopper (1978) p. 16.
  31. ^ Sammet (1969) p. 316
  32. ^ Sammet (1978) p. 204.
  33. ^ Richard L. Wexelblat: History of Programming Languages, Academic Press, 1981, chapter XIV.
  34. ^ François Labelle. «Programming Language Usage Graph». SourceForge. Archived from the original on 17 June 2006. Retrieved 21 June 2006. This comparison analyzes trends in the number of projects hosted by a popular community programming repository. During most years of the comparison, C leads by a considerable margin; in 2006, Java overtakes C, but the combination of C/C++ still leads considerably.
  35. ^ Hayes, Brian (2006). «The Semicolon Wars». American Scientist. 94 (4): 299–303. doi:10.1511/2006.60.299.
  36. ^ Dijkstra, Edsger W. (March 1968). «Go To Statement Considered Harmful» (PDF). Communications of the ACM. 11 (3): 147–148. doi:10.1145/362929.362947. S2CID 17469809. Archived (PDF) from the original on 13 May 2014.
  37. ^ Tetsuro Fujise, Takashi Chikayama, Kazuaki Rokusawa, Akihiko Nakase (December 1994). «KLIC: A Portable Implementation of KL1» Proc. of FGCS ’94, ICOT Tokyo, December 1994. «Archived copy». Archived from the original on 25 September 2006. Retrieved 9 October 2006. KLIC is a portable implementation of a concurrent logic programming language KL1.
  38. ^ Jim Bender (15 March 2004). «Mini-Bibliography on Modules for Functional Programming Languages». ReadScheme.org. Archived from the original on 24 September 2006.
  39. ^ Michael Sipser (1996). Introduction to the Theory of Computation. PWS Publishing. ISBN 978-0-534-94728-6. Section 2.2: Pushdown Automata, pp.101–114.
  40. ^ Jeffrey Kegler, «Perl and Undecidability Archived 17 August 2009 at the Wayback Machine», The Perl Review. Papers 2 and 3 prove, using respectively Rice’s theorem and direct reduction to the halting problem, that the parsing of Perl programs is in general undecidable.
  41. ^ Marty Hall, 1995, Lecture Notes: Macros Archived 6 August 2013 at the Wayback Machine, PostScript version Archived 17 August 2000 at the Wayback Machine
  42. ^ Michael Lee Scott, Programming language pragmatics, Edition 2, Morgan Kaufmann, 2006, ISBN 0-12-633951-1, p. 18–19
  43. ^ a b c d e f g Andrew Cooke. «Introduction To Computer Languages». Archived from the original on 15 August 2012. Retrieved 13 July 2012.
  44. ^ Leivant, Daniel (1983). Polymorphic type inference. ACM SIGACT-SIGPLAN symposium on Principles of programming languages. Austin, Texas: ACM Press. pp. 88–98. doi:10.1145/567067.567077. ISBN 978-0-89791-090-3.
  45. ^ Specifically, instantiations of generic types are inferred for certain expression forms. Type inference in Generic Java—the research language that provided the basis for Java 1.5’s bounded parametric polymorphism extensions—is discussed in two informal manuscripts from the Types mailing list: Generic Java type inference is unsound Archived 29 January 2007 at the Wayback Machine (Alan Jeffrey, 17 December 2001) and Sound Generic Java type inference Archived 29 January 2007 at the Wayback Machine (Martin Odersky, 15 January 2002). C#’s type system is similar to Java’s and uses a similar partial type inference scheme.
  46. ^ «Revised Report on the Algorithmic Language Scheme». 20 February 1998. Archived from the original on 14 July 2006.
  47. ^ Luca Cardelli and Peter Wegner. «On Understanding Types, Data Abstraction, and Polymorphism». Manuscript (1985). Archived from the original on 19 June 2006.
  48. ^ Steven R. Fischer, A history of language, Reaktion Books, 2003, ISBN 1-86189-080-X, p. 205
  49. ^ Éric Lévénez (2011). «Computer Languages History». Archived from the original on 7 January 2006.
  50. ^ Jing Huang. «Artificial Language vs. Natural Language». Archived from the original on 3 September 2009.
  51. ^ When IBM first published PL/I, for example, it rather ambitiously titled its manual The universal programming language PL/I (IBM Library; 1966). The title reflected IBM’s goals for unlimited subsetting capability: «PL/I is designed in such a way that one can isolate subsets from it satisfying the requirements of particular applications.» («PL/I». Encyclopedia of Mathematics. Archived from the original on 26 April 2012. Retrieved 29 June 2006.). Ada and UNCOL had similar early goals.
  52. ^ Frederick P. Brooks, Jr.: The Mythical Man-Month, Addison-Wesley, 1982, pp. 93–94
  53. ^ Dijkstra, Edsger W. On the foolishness of «natural language programming.» Archived 20 January 2008 at the Wayback Machine EWD667.
  54. ^ Perlis, Alan (September 1982). «Epigrams on Programming». SIGPLAN Notices Vol. 17, No. 9. pp. 7–13. Archived from the original on 17 January 1999.
  55. ^ Milner, R.; M. Tofte; R. Harper; D. MacQueen (1997). The Definition of Standard ML (Revised). MIT Press. ISBN 978-0-262-63181-5.
  56. ^ Kelsey, Richard; William Clinger; Jonathan Rees (February 1998). «Section 7.2 Formal semantics». Revised5 Report on the Algorithmic Language Scheme. Archived from the original on 6 July 2006.
  57. ^ ANSI – Programming Language Rexx, X3-274.1996
  58. ^ Steve, McConnell (2004). Code complete (Second ed.). Redmond, Washington. pp. 590, 600. ISBN 0735619670. OCLC 54974573.
  59. ^ See: Oracle America, Inc. v. Google, Inc.
  60. ^ «Guide to Programming Languages | ComputerScience.org». ComputerScience.org. Retrieved 13 May 2018.
  61. ^ «The basics». ibm.com. 10 May 2011. Retrieved 13 May 2018.
  62. ^ «HOPL: an interactive Roster of Programming Languages». Australia: Murdoch University. Archived from the original on 20 February 2011. Retrieved 1 June 2009. This site lists 8512 languages.
  63. ^ Mayer, Philip; Bauer, Alexander (2015). «An empirical analysis of the utilization of multiple programming languages in open source projects». Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering. Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering – EASE ’15. New York, NY, USA: ACM. pp. 4:1–4:10. doi:10.1145/2745802.2745805. ISBN 978-1-4503-3350-4. Results: We found (a) a mean number of 5 languages per project with a clearly dominant main general-purpose language and 5 often-used DSL types, (b) a significant influence of the size, number of commits, and the main language on the number of languages as well as no significant influence of age and number of contributors, and (c) three language ecosystems grouped around XML, Shell/Make, and HTML/CSS. Conclusions: Multi-language programming seems to be common in open-source projects and is a factor that must be dealt with in tooling and when assessing the development and maintenance of such software systems.
  64. ^ Abelson, Sussman, and Sussman. «Structure and Interpretation of Computer Programs». Archived from the original on 26 February 2009. Retrieved 3 March 2009.
  65. ^ Brown Vicki (1999). «Scripting Languages». mactech.com. Archived from the original on 2 December 2017.
  66. ^ Georgina Swan (21 September 2009). «COBOL turns 50». computerworld.com.au. Archived from the original on 19 October 2013. Retrieved 19 October 2013.
  67. ^ Ed Airey (3 May 2012). «7 Myths of COBOL Debunked». developer.com. Archived from the original on 19 October 2013. Retrieved 19 October 2013.
  68. ^ Nicholas Enticknap. «SSL/Computer Weekly IT salary survey: finance boom drives IT job growth». Computer Weekly. Archived from the original on 26 October 2011. Retrieved 14 June 2013.
  69. ^ «Counting programming languages by book sales». Radar.oreilly.com. 2 August 2006. Archived from the original on 17 May 2008.
  70. ^ Bieman, J.M.; Murdock, V., Finding code on the World Wide Web: a preliminary investigation, Proceedings First IEEE International Workshop on Source Code Analysis and Manipulation, 2001
  71. ^ «Most Popular and Influential Programming Languages of 2018». stackify.com. 18 December 2017. Retrieved 29 August 2018.
  72. ^ «Fluent Python 2nd edition». Thoughtworks. Retrieved 11 October 2022.
  73. ^ Carl A. Gunter, Semantics of Programming Languages: Structures and Techniques, MIT Press, 1992, ISBN 0-262-57095-5, p. 1
  74. ^ «TUNES: Programming Languages». Archived from the original on 20 October 2007.
  75. ^ Wirth, Niklaus (1993). «Recollections about the development of Pascal». The second ACM SIGPLAN conference on History of programming languages – HOPL-II. Proc. 2nd ACM SIGPLAN Conference on History of Programming Languages. Vol. 28. pp. 333–342. CiteSeerX 10.1.1.475.6989. doi:10.1145/154766.155378. ISBN 978-0-89791-570-0. S2CID 9783524.

Further reading[edit]

  • Abelson, Harold; Sussman, Gerald Jay (1996). Structure and Interpretation of Computer Programs (2nd ed.). MIT Press. Archived from the original on 9 March 2018.
  • Raphael Finkel: Advanced Programming Language Design, Addison Wesley 1995.
  • Daniel P. Friedman, Mitchell Wand, Christopher T. Haynes: Essentials of Programming Languages, The MIT Press 2001.
  • Maurizio Gabbrielli and Simone Martini: «Programming Languages: Principles and Paradigms», Springer, 2010.
  • David Gelernter, Suresh Jagannathan: Programming Linguistics, The MIT Press 1990.
  • Ellis Horowitz (ed.): Programming Languages, a Grand Tour (3rd ed.), 1987.
  • Ellis Horowitz: Fundamentals of Programming Languages, 1989.
  • Shriram Krishnamurthi: Programming Languages: Application and Interpretation, online publication.
  • Bruce J. MacLennan: Principles of Programming Languages: Design, Evaluation, and Implementation, Oxford University Press 1999.
  • John C. Mitchell: Concepts in Programming Languages, Cambridge University Press 2002.
  • Benjamin C. Pierce: Types and Programming Languages, The MIT Press 2002.
  • Terrence W. Pratt and Marvin Victor Zelkowitz: Programming Languages: Design and Implementation (4th ed.), Prentice Hall 2000.
  • Peter H. Salus. Handbook of Programming Languages (4 vols.). Macmillan 1998.
  • Ravi Sethi: Programming Languages: Concepts and Constructs, 2nd ed., Addison-Wesley 1996.
  • Michael L. Scott: Programming Language Pragmatics, Morgan Kaufmann Publishers 2005.
  • Robert W. Sebesta: Concepts of Programming Languages, 9th ed., Addison Wesley 2009.
  • Franklyn Turbak and David Gifford with Mark Sheldon: Design Concepts in Programming Languages, The MIT Press 2009.
  • Peter Van Roy and Seif Haridi. Concepts, Techniques, and Models of Computer Programming, The MIT Press 2004.
  • David A. Watt. Programming Language Concepts and Paradigms. Prentice Hall 1990.
  • David A. Watt and Muffy Thomas. Programming Language Syntax and Semantics. Prentice Hall 1991.
  • David A. Watt. Programming Language Processors. Prentice Hall 1993.
  • David A. Watt. Programming Language Design Concepts. John Wiley & Sons 2004.

Examining the list of shared libraries (DLLs in Windows-speak) of a compiled program can give a clue, because each language typically has a distinctive library that provides its runtime environment.

For example, on my Linux PC, running the ldd command on an executable produced the following tell-tale output:

ldd *redacted*
    linux-gate.so.1 =>  (0x0042e000)
    libxerces-c.so.28 => /usr/local/lib/libxerces-c.so.28 (0x004b0000)
    *redacted*.so => not found
    libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x05f28000)
    libm.so.6 => /lib/libm.so.6 (0x00a61000)
    libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x05be0000)
    libc.so.6 => /lib/libc.so.6 (0x00906000)
    libpthread.so.0 => /lib/libpthread.so.0 (0x00a93000)
    /lib/ld-linux.so.2 (0x008e7000)

The use of libc.so suggests C or C++. The use of libstdc++.so suggests C++. In fact, that was a C++ program.

Searching the program executable for human-readable strings can also give clues, especially if it has debugging information present.

For example, running the strings command on that same executable revealed (among much other text) the following tell-tale strings:

virtual std::ostream* XmlToDcs::AmsResultsHandler::createOutputFileIfPossible()
pointer != static_cast< unsigned int >(-1)
std::ofstream* XmlToDcs::IndexElementsHandler::createStatementsFile(const tm&, char, char, unsigned int)
EamResultsHandler.cpp

The first three look like fragments of C++, and the last looks like the name of a C++ source file.

A programming language is a notation for writing programs, which are specifications of a computation or algorithm.[2] Some authors restrict the term «programming language» to those languages that can express all possible algorithms.[2][3] Traits often considered important for what constitutes a programming language include:

Function and target

A computer programming language is a language used to write computer programs, which involves a computer performing some kind of computation[4] or algorithm and possibly controlling external devices such as printers, disk drives, robots,[5] and so on. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language.[6] In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way.[7] Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines.

Abstractions

Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The practical necessity that a programming language support adequate abstractions is expressed by the abstraction principle.[8] This principle is sometimes formulated as a recommendation to the programmer to make proper use of such abstractions.[9]

Expressive power

The theory of computation classifies languages by the computations they are capable of expressing. All Turing-complete languages can implement the same set of algorithms. ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete, yet are often called programming languages.[10][11]

In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized datum handled as a unit by the instruction set or the hardware of the processor. The number of bits or digits[a] in a word (the word size, word width, or word length) is an important characteristic of any specific processor design or computer architecture.

The size of a word is reflected in many aspects of a computer’s structure and operation; the majority of the registers in a processor are usually word-sized and the largest datum that can be transferred to and from the working memory in a single operation is a word in many (not all) architectures. The largest possible address size, used to designate a location in memory, is typically a hardware word (here, «hardware word» means the full-sized natural word of the processor, as opposed to any other definition used).

Documentation for older computers with fixed word size commonly states memory sizes in words rather than bytes or characters. The documentation sometimes uses metric prefixes correctly, sometimes with rounding, e.g., 65 kilowords (KW) meaning 65,536 words, and sometimes uses them incorrectly, with kilowords (KW) meaning 1,024 words (2¹⁰) and megawords (MW) meaning 1,048,576 words (2²⁰). With standardization on 8-bit bytes and byte addressability, stating memory sizes in bytes, kilobytes, and megabytes with powers of 1024 rather than 1000 has become the norm, although there is some use of the IEC binary prefixes.

Several of the earliest computers (and a few modern ones as well) use binary-coded decimal rather than plain binary, typically having a word size of 10 or 12 decimal digits, and some early decimal computers have no fixed word length at all. Early binary systems tended to use word lengths that were some multiple of 6 bits, with the 36-bit word being especially common on mainframe computers. The introduction of ASCII led to the move to systems with word lengths that were a multiple of 8 bits, with 16-bit machines being popular in the 1970s before the move to modern processors with 32 or 64 bits.[1] Special-purpose designs like digital signal processors may have any word length from 4 to 80 bits.[1]

The size of a word can sometimes differ from the expected one due to backward compatibility with earlier computers. If multiple compatible variations or a family of processors share a common architecture and instruction set but differ in their word sizes, their documentation and software may become notationally complex to accommodate the difference (see Size families below).

Uses of words

Depending on how a computer is organized, word-size units may be used for:

Fixed-point numbers
Holders for fixed point, usually integer, numerical values may be available in one or in several different sizes, but one of the sizes available will almost always be the word. The other sizes, if any, are likely to be multiples or fractions of the word size. The smaller sizes are normally used only for efficient use of memory; when loaded into the processor, their values usually go into a larger, word-sized holder.
Floating-point numbers
Holders for floating-point numerical values are typically either a word or a multiple of a word.
Addresses
Holders for memory addresses must be of a size capable of expressing the needed range of values but not be excessively large, so often the size used is the word, though it can also be a multiple or fraction of the word size.
Registers
Processor registers are designed with a size appropriate for the type of data they hold, e.g. integers, floating-point numbers, or addresses. Many computer architectures use general-purpose registers that are capable of storing data in multiple representations.
Memory–processor transfer
When the processor reads from the memory subsystem into a register or writes a register’s value to memory, the amount of data transferred is often a word. Historically, this number of bits, which could be transferred in one cycle, was also called a catena in some environments (such as the Bull GAMMA 60).[2][3] In simple memory subsystems, the word is transferred over the memory data bus, which typically has a width of a word or half-word. In memory subsystems that use caches, the word-sized transfer is the one between the processor and the first level of cache; at lower levels of the memory hierarchy, larger transfers (which are a multiple of the word size) are normally used.
Unit of address resolution
In a given architecture, successive address values designate successive units of memory; this unit is the unit of address resolution. In most computers, the unit is either a character (e.g. a byte) or a word. (A few computers have used bit resolution.) If the unit is a word, then a larger amount of memory can be accessed using an address of a given size at the cost of added complexity to access individual characters. On the other hand, if the unit is a byte, then individual characters can be addressed (i.e. selected during the memory operation).
Instructions
Machine instructions are normally the size of the architecture’s word, such as in RISC architectures, or a multiple of the «char» size that is a fraction of it. This is a natural choice since instructions and data usually share the same memory subsystem. In Harvard architectures the word sizes of instructions and data need not be related, as instructions and data are stored in different memories; for example, the processor in the 1ESS electronic telephone switch has 37-bit instructions and 23-bit data words.
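As a rough illustration of these roles in C: on a typical LP64 system the natural word is 64 bits, and integer, floating-point, and address holders all commonly occupy one word. This is a sketch, not a guarantee; the C standard leaves these sizes platform-specific.

    #include <stdio.h>

    /* Print the sizes of common holders on the machine this is
     * compiled for; on an LP64 system each is one 64-bit word. */
    int main(void)
    {
        printf("long   : %zu bits\n", sizeof(long)   * 8); /* fixed-point holder    */
        printf("double : %zu bits\n", sizeof(double) * 8); /* floating-point holder */
        printf("void * : %zu bits\n", sizeof(void *) * 8); /* address holder        */
        return 0;
    }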

Word size choice

When a computer architecture is designed, the choice of a word size is of substantial importance. There are design considerations which encourage particular bit-group sizes for particular uses (e.g. for addresses), and these considerations point to different sizes for different uses. However, considerations of economy in design strongly push for one size, or a very few sizes related by multiples or fractions (submultiples) to a primary size. That preferred size becomes the word size of the architecture.

In the past (before variable-width character encodings), character size was one of the influences on the unit of address resolution and the choice of word size. Before the mid-1960s, characters were most often stored in six bits; this allowed no more than 64 characters, so the alphabet was limited to upper case. Since it is efficient in time and space to have the word size be a multiple of the character size, word sizes in this period were usually multiples of 6 bits (in binary machines). A common choice then was the 36-bit word, which is also a good size for the numeric properties of a floating point format.

After the introduction of the IBM System/360 design, which uses eight-bit characters and supports lower-case letters, the standard size of a character (or more accurately, a byte) becomes eight bits. Word sizes thereafter are naturally multiples of eight bits, with 16, 32, and 64 bits being commonly used.

Variable-word architectures

Early machine designs included some that used what is often termed a variable word length. In this type of organization, an operand has no fixed length. Depending on the machine and the instruction, the length might be denoted by a count field, by a delimiting character, or by an additional bit called, e.g., flag, or word mark. Such machines often use binary-coded decimal in 4-bit digits, or in 6-bit characters, for numbers. This class of machines includes the IBM 702, IBM 705, IBM 7080, IBM 7010, UNIVAC 1050, IBM 1401, IBM 1620, and RCA 301.

Most of these machines work on one unit of memory at a time and since each instruction or datum is several units long, each instruction takes several cycles just to access memory. These machines are often quite slow because of this. For example, instruction fetches on an IBM 1620 Model I take 8 cycles (160 μs) just to read the 12 digits of the instruction (the Model II reduced this to 6 cycles, or 4 cycles if the instruction did not need both address fields). Instruction execution takes a variable number of cycles, depending on the size of the operands.

Word, bit and byte addressing

The memory model of an architecture is strongly influenced by the word size. In particular, the resolution of a memory address, that is, the smallest unit that can be designated by an address, has often been chosen to be the word. In this approach, the word-addressable machine approach, address values which differ by one designate adjacent memory words. This is natural in machines which deal almost always in word (or multiple-word) units, and has the advantage of allowing instructions to use minimally sized fields to contain addresses, which can permit a smaller instruction size or a larger variety of instructions.

When byte processing is to be a significant part of the workload, it is usually more advantageous to use the byte, rather than the word, as the unit of address resolution. Address values which differ by one designate adjacent bytes in memory. This allows an arbitrary character within a character string to be addressed straightforwardly. A word can still be addressed, but the address to be used requires a few more bits than the word-resolution alternative. The word size needs to be an integer multiple of the character size in this organization. This addressing approach was used in the IBM 360, and has been the most common approach in machines designed since then.

When the workload involves processing fields of different sizes, it can be advantageous to address to the bit. Machines with bit addressing may have some instructions that use a programmer-defined byte size and other instructions that operate on fixed data sizes. As an example, on the IBM 7030[4] («Stretch»), a floating point instruction can only address words while an integer arithmetic instruction can specify a field length of 1-64 bits, a byte size of 1-8 bits and an accumulator offset of 0-127 bits.

In a byte-addressable machine with storage-to-storage (SS) instructions, there are typically move instructions to copy one or multiple bytes from one arbitrary location to another. In a byte-oriented (byte-addressable) machine without SS instructions, moving a single byte from one arbitrary location to another typically requires:

  1. LOAD the source byte
  2. STORE the result back in the target byte

Individual bytes can be accessed on a word-oriented machine in one of two ways. Bytes can be manipulated by a combination of shift and mask operations in registers. Moving a single byte from one arbitrary location to another may require the equivalent of the following:

  1. LOAD the word containing the source byte
  2. SHIFT the source word to align the desired byte to the correct position in the target word
  3. AND the source word with a mask to zero out all but the desired bits
  4. LOAD the word containing the target byte
  5. AND the target word with a mask to zero out the target byte
  6. OR the registers containing the source and target words to insert the source byte
  7. STORE the result back in the target location
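A minimal C sketch of that sequence, modeling memory as an array of 32-bit words (the byte numbering within a word is a modeling choice here, not a property of any particular machine):

    #include <stdint.h>

    /* Move one 8-bit byte between arbitrary byte addresses on a
     * word-oriented machine. Byte 0 is modeled as the least
     * significant byte of its word. */
    void move_byte(uint32_t mem[], unsigned src, unsigned dst)
    {
        unsigned src_shift = (src % 4) * 8;
        unsigned dst_shift = (dst % 4) * 8;

        uint32_t word = mem[src / 4];                 /* LOAD word holding the source byte */
        uint32_t byte = (word >> src_shift) & 0xFFu;  /* SHIFT and AND to isolate the byte */

        uint32_t target = mem[dst / 4];               /* LOAD word holding the target byte */
        target &= ~(0xFFu << dst_shift);              /* AND with a mask to clear it       */
        target |= byte << dst_shift;                  /* OR the source byte into place     */
        mem[dst / 4] = target;                        /* STORE the result                  */
    }

On a byte-addressable machine, by contrast, the same move is just the LOAD and STORE pair shown earlier.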

Alternatively many word-oriented machines implement byte operations with instructions using special byte pointers in registers or memory. For example, the PDP-10 byte pointer contained the size of the byte in bits (allowing different-sized bytes to be accessed), the bit position of the byte within the word, and the word address of the data. Instructions could automatically adjust the pointer to the next byte on, for example, load and deposit (store) operations.
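A hypothetical C rendering of such a byte pointer (field names, widths, and layout are illustrative, not the exact PDP-10 encoding):

    #include <stdint.h>

    /* A PDP-10-style byte pointer: byte size in bits, bit position of
     * the byte within the word, and the word address of the data, so
     * differently sized bytes can be addressed. */
    struct byte_pointer {
        uint8_t  size;  /* byte size in bits (assumed < 64 here) */
        uint8_t  pos;   /* bit position within the word          */
        uint32_t addr;  /* word address of the data              */
    };

    /* Fetch the designated byte from a 36-bit word, held here in a
     * 64-bit integer for modeling purposes. */
    static uint64_t load_byte(const uint64_t *mem, struct byte_pointer bp)
    {
        uint64_t mask = (1ull << bp.size) - 1;
        return (mem[bp.addr] >> bp.pos) & mask;
    }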

Powers of two

Different amounts of memory are used to store data values with different degrees of precision. The commonly used sizes are usually a power of two multiple of the unit of address resolution (byte or word). Converting the index of an item in an array into the memory address offset of the item then requires only a shift operation rather than a multiplication. In some cases this relationship can also avoid the use of division operations. As a result, most modern computer designs have word sizes (and other operand sizes) that are a power of two times the size of a byte.
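A trivial C illustration, assuming 4-byte elements: the index-to-offset multiply reduces to a left shift, which compilers perform automatically for power-of-two sizes.

    #include <stddef.h>

    /* Byte offset of element i in an array of 4-byte items. */
    size_t offset_mul(size_t i)   { return i * 4; }
    size_t offset_shift(size_t i) { return i << 2; }  /* same result, shift form */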

Size families

As computer designs have grown more complex, the central importance of a single word size to an architecture has decreased. Although more capable hardware can use a wider variety of sizes of data, market forces exert pressure to maintain backward compatibility while extending processor capability. As a result, what might have been the central word size in a fresh design has to coexist as an alternative size to the original word size in a backward compatible design. The original word size remains available in future designs, forming the basis of a size family.

In the mid-1970s, DEC designed the VAX to be a 32-bit successor of the 16-bit PDP-11. They used word for a 16-bit quantity, while longword referred to a 32-bit quantity; this terminology is the same as the terminology used for the PDP-11. This was in contrast to earlier machines, where the natural unit of addressing memory would be called a word, while a quantity that is one half a word would be called a halfword. In fitting with this scheme, a VAX quadword is 64 bits. They continued this 16-bit word/32-bit longword/64-bit quadword terminology with the 64-bit Alpha.

Another example is the x86 family, of which processors of three different word lengths (16-bit, later 32- and 64-bit) have been released, while word continues to designate a 16-bit quantity. As software is routinely ported from one word-length to the next, some APIs and documentation define or refer to an older (and thus shorter) word-length than the full word length on the CPU that software may be compiled for. Also, similar to how bytes are used for small numbers in many programs, a shorter word (16 or 32 bits) may be used in contexts where the range of a wider word is not needed (especially where this can save considerable stack space or cache memory space). For example, Microsoft’s Windows API maintains the programming language definition of WORD as 16 bits, despite the fact that the API may be used on a 32- or 64-bit x86 processor, where the standard word size would be 32 or 64 bits, respectively. Data structures containing such different sized words refer to them as:

  • WORD (16 bits/2 bytes)
  • DWORD (32 bits/4 bytes)
  • QWORD (64 bits/8 bytes)
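In terms of C’s fixed-width types, the sizes behind those names map roughly as follows (the real Windows headers spell the definitions differently; this is only a size illustration):

    #include <stdint.h>

    typedef uint16_t WORD;   /* 16 bits: the original 8086 word */
    typedef uint32_t DWORD;  /* 32 bits: "double word"          */
    typedef uint64_t QWORD;  /* 64 bits: "quadword"             */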

A similar phenomenon has developed in Intel’s x86 assembly language – because of the support for various sizes (and backward compatibility) in the instruction set, some instruction mnemonics carry «d» or «q» identifiers denoting «double-«, «quad-» or «double-quad-«, which are in terms of the architecture’s original 16-bit word size.

An example with a different word size is the IBM System/360 family. In the System/360 architecture, System/370 architecture and System/390 architecture, there are 8-bit bytes, 16-bit halfwords, 32-bit words and 64-bit doublewords. The z/Architecture, which is the 64-bit member of that architecture family, continues to refer to 16-bit halfwords, 32-bit words, and 64-bit doublewords, and additionally features 128-bit quadwords.

In general, new processors must use the same data word lengths and virtual address widths as an older processor to have binary compatibility with that older processor.

Often carefully written source code – written with source-code compatibility and software portability in mind – can be recompiled to run on a variety of processors, even ones with different data word lengths or different address widths or both.
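A minimal C sketch of that practice: exact-width types keep data sizes stable across processors with different word lengths, while uintptr_t adapts to whatever the address width happens to be.

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t   value = 42;                /* 32 bits on any conforming platform */
        uintptr_t addr  = (uintptr_t)&value; /* wide enough for an address here    */

        printf("value = %" PRId32 "\n", value);
        printf("address = 0x%" PRIxPTR "\n", addr);
        printf("address width: %zu bits\n", sizeof addr * 8);
        return 0;
    }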

Table of word sizes

key: bit: bits, c: characters, d: decimal digits, w: word size of architecture, n: variable size, wm: word mark

Year | Computer architecture | Word size w | Integer sizes | Floating-point sizes | Instruction sizes | Unit of address resolution | Char size
1837 | Babbage Analytical engine | 50 d | w | | Five different cards were used for different functions, exact size of cards not known. | w |
1941 | Zuse Z3 | 22 bit | | w | 8 bit | w |
1942 | ABC | 50 bit | w | | | |
1944 | Harvard Mark I | 23 d | w | | 24 bit | |
1946 (1948) {1953} | ENIAC (w/Panel #16[5]) {w/Panel #26[6]} | 10 d | w, 2w (w) {w} | | (2 d, 4 d, 6 d, 8 d) {2 d, 4 d, 6 d, 8 d} | {w} |
1948 | Manchester Baby | 32 bit | w | | w | w |
1951 | UNIVAC I | 12 d | w | | ½w | w | 1 d
1952 | IAS machine | 40 bit | w | | ½w | w | 5 bit
1952 | Fast Universal Digital Computer M-2 | 34 bit | w? | w | 34 bit = 4-bit opcode plus 3×10 bit address | 10 bit |
1952 | IBM 701 | 36 bit | ½w, w | | ½w | ½w, w | 6 bit
1952 | UNIVAC 60 | n d | 1 d, … 10 d | | | | 2 d, 3 d
1952 | ARRA I | 30 bit | w | | w | w | 5 bit
1953 | IBM 702 | n c | 0 c, … 511 c | | 5 c | c | 6 bit
1953 | UNIVAC 120 | n d | 1 d, … 10 d | | | | 2 d, 3 d
1953 | ARRA II | 30 bit | w | 2w | ½w | w | 5 bit
1954 (1955) | IBM 650 (w/IBM 653) | 10 d | w | (w) | w | w | 2 d
1954 | IBM 704 | 36 bit | w | w | w | w | 6 bit
1954 | IBM 705 | n c | 0 c, … 255 c | | 5 c | c | 6 bit
1954 | IBM NORC | 16 d | w | w, 2w | w | w |
1956 | IBM 305 | n d | 1 d, … 100 d | | 10 d | d | 1 d
1956 | ARMAC | 34 bit | w | w | ½w | w | 5 bit, 6 bit
1956 | LGP-30 | 31 bit | w | | 16 bit | w | 6 bit
1957 | Autonetics Recomp I | 40 bit | w, 79 bit, 8 d, 15 d | | ½w | ½w, w | 5 bit
1958 | UNIVAC II | 12 d | w | | ½w | w | 1 d
1958 | SAGE | 32 bit | ½w | | w | w | 6 bit
1958 | Autonetics Recomp II | 40 bit | w, 79 bit, 8 d, 15 d | 2w | ½w | ½w, w | 5 bit
1958 | Setun | 6 trit (~9.5 bits)[b] | up to 6 tryte | | up to 3 trytes | | 4 trit?
1958 | Electrologica X1 | 27 bit | w | 2w | w | w | 5 bit, 6 bit
1959 | IBM 1401 | n c | 1 c, … | | 1 c, 2 c, 4 c, 5 c, 7 c, 8 c | c | 6 bit + wm
1959 (TBD) | IBM 1620 | n d | 2 d, … | (4 d, … 102 d) | 12 d | d | 2 d
1960 | LARC | 12 d | w, 2w | w, 2w | w | w | 2 d
1960 | CDC 1604 | 48 bit | w | w | ½w | w | 6 bit
1960 | IBM 1410 | n c | 1 c, … | | 1 c, 2 c, 6 c, 7 c, 11 c, 12 c | c | 6 bit + wm
1960 | IBM 7070 | 10 d[c] | w, 1–9 d | w | w | w, d | 2 d
1960 | PDP-1 | 18 bit | w | | w | w | 6 bit
1960 | Elliott 803 | 39 bit | | | | |
1961 | IBM 7030 (Stretch) | 64 bit | 1 bit, … 64 bit, 1 d, … 16 d | w | ½w, w | bit (integer), ½w (branch), w (float) | 1 bit, … 8 bit
1961 | IBM 7080 | n c | 0 c, … 255 c | | 5 c | c | 6 bit
1962 | GE-6xx | 36 bit | w, 2w | w, 2w, 80 bit | w | w | 6 bit, 9 bit
1962 | UNIVAC III | 25 bit | w, 2w, 3w, 4w, 6 d, 12 d | | w | w | 6 bit
1962 | Autonetics D-17B Minuteman I Guidance Computer | 27 bit | 11 bit, 24 bit | | 24 bit | w |
1962 | UNIVAC 1107 | 36 bit | ⅙w, ⅓w, ½w, w | w | w | w | 6 bit
1962 | IBM 7010 | n c | 1 c, … | | 1 c, 2 c, 6 c, 7 c, 11 c, 12 c | c | 6 bit + wm
1962 | IBM 7094 | 36 bit | w | w, 2w | w | w | 6 bit
1962 | SDS 9 Series | 24 bit | w | 2w | w | w |
1963 (1966) | Apollo Guidance Computer | 15 bit | w | | w, 2w | w |
1963 | Saturn Launch Vehicle Digital Computer | 26 bit | w | | 13 bit | w |
1964/1966 | PDP-6/PDP-10 | 36 bit | w | w, 2w | w | w | 6 bit; 7 bit (typical); 9 bit
1964 | Titan | 48 bit | w | w | w | w | w
1964 | CDC 6600 | 60 bit | w | w | ¼w, ½w | w | 6 bit
1964 | Autonetics D-37C Minuteman II Guidance Computer | 27 bit | 11 bit, 24 bit | | 24 bit | w | 4 bit, 5 bit
1965 | Gemini Guidance Computer | 39 bit | 26 bit | | 13 bit | 13 bit, 26 bit |
1965 | IBM 1130 | 16 bit | w, 2w | 2w, 3w | w, 2w | w | 8 bit
1965 | IBM System/360 | 32 bit | ½w, w, 1 d, … 16 d | w, 2w | ½w, w, 1½w | 8 bit | 8 bit
1965 | UNIVAC 1108 | 36 bit | ⅙w, ¼w, ⅓w, ½w, w, 2w | w, 2w | w | w | 6 bit, 9 bit
1965 | PDP-8 | 12 bit | w | | w | w | 8 bit
1965 | Electrologica X8 | 27 bit | w | 2w | w | w | 6 bit, 7 bit
1966 | SDS Sigma 7 | 32 bit | ½w, w | w, 2w | w | 8 bit | 8 bit
1969 | Four-Phase Systems AL1 | 8 bit | w | | ? | ? | ?
1970 | MP944 | 20 bit | w | | ? | ? | ?
1970 | PDP-11 | 16 bit | w | 2w, 4w | w, 2w, 3w | 8 bit | 8 bit
1971 | CDC STAR-100 | 64 bit | ½w, w | ½w, w | ½w, w | bit | 8 bit
1971 | TMS1802NC | 4 bit | w | | ? | ? |
1971 | Intel 4004 | 4 bit | w, d | | 2w, 4w | w |
1972 | Intel 8008 | 8 bit | w, 2 d | | w, 2w, 3w | w | 8 bit
1972 | Calcomp 900 | 9 bit | w | | w, 2w | w | 8 bit
1974 | Intel 8080 | 8 bit | w, 2w, 2 d | | w, 2w, 3w | w | 8 bit
1975 | ILLIAC IV | 64 bit | w | w, ½w | w | w |
1975 | Motorola 6800 | 8 bit | w, 2 d | | w, 2w, 3w | w | 8 bit
1975 | MOS Tech. 6501 / MOS Tech. 6502 | 8 bit | w, 2 d | | w, 2w, 3w | w | 8 bit
1976 | Cray-1 | 64 bit | 24 bit, w | w | ¼w, ½w | w | 8 bit
1976 | Zilog Z80 | 8 bit | w, 2w, 2 d | | w, 2w, 3w, 4w, 5w | w | 8 bit
1978 (1980) | 16-bit x86 (Intel 8086) (w/floating point: Intel 8087) | 16 bit | ½w, w, 2 d | (2w, 4w, 5w, 17 d) | ½w, w, … 7w | 8 bit | 8 bit
1978 | VAX | 32 bit | ¼w, ½w, w, 1 d, … 31 d, 1 bit, … 32 bit | w, 2w | ¼w, … 14¼w | 8 bit | 8 bit
1979 (1984) | Motorola 68000 series (w/floating point) | 32 bit | ¼w, ½w, w, 2 d | (w, 2w, 2½w) | ½w, w, … 7½w | 8 bit | 8 bit
1985 | IA-32 (Intel 80386) (w/floating point) | 32 bit | ¼w, ½w, w | (w, 2w, 80 bit) | 8 bit, … 120 bit (¼w … 3¾w) | 8 bit | 8 bit
1985 | ARMv1 | 32 bit | ¼w, w | | w | 8 bit | 8 bit
1985 | MIPS I | 32 bit | ¼w, ½w, w | w, 2w | w | 8 bit | 8 bit
1991 | Cray C90 | 64 bit | 32 bit, w | w | ¼w, ½w, 48 bit | w | 8 bit
1992 | Alpha | 64 bit | 8 bit, ¼w, ½w, w | ½w, w | ½w | 8 bit | 8 bit
1992 | PowerPC | 32 bit | ¼w, ½w, w | w, 2w | w | 8 bit | 8 bit
1996 | ARMv4 (w/Thumb) | 32 bit | ¼w, ½w, w | | w (½w, w) | 8 bit | 8 bit
2000 | IBM z/Architecture (w/vector facility) | 64 bit | ¼w, ½w, w, 1 d, … 31 d | ½w, w, 2w | ¼w, ½w, ¾w | 8 bit | 8 bit, UTF-16, UTF-32
2001 | IA-64 | 64 bit | 8 bit, ¼w, ½w, w | ½w, w | 41 bit (in 128-bit bundles)[7] | 8 bit | 8 bit
2001 | ARMv6 (w/VFP) | 32 bit | 8 bit, ½w, w | (w, 2w) | ½w, w | 8 bit | 8 bit
2003 | x86-64 | 64 bit | 8 bit, ¼w, ½w, w | ½w, w, 80 bit | 8 bit, … 120 bit | 8 bit | 8 bit
2013 | ARMv8-A and ARMv9-A | 64 bit | 8 bit, ¼w, ½w, w | ½w, w | ½w | 8 bit | 8 bit

[8][9]

See also

  • Integer (computer science)

Notes

  1. ^ Many early computers were decimal, and a few were ternary
  2. ^ The bit equivalent is computed by taking the amount of information entropy provided by the trit, which is log₂ 3 ≈ 1.58 bits. This gives an equivalent of about 9.51 bits for 6 trits.
  3. ^ Three-state sign

References

  1. ^ a b Beebe, Nelson H. F. (2017-08-22). «Chapter I. Integer arithmetic». The Mathematical-Function Computation Handbook — Programming Using the MathCW Portable Software Library (1 ed.). Salt Lake City, UT, USA: Springer International Publishing AG. p. 970. doi:10.1007/978-3-319-64110-2. ISBN 978-3-319-64109-6. LCCN 2017947446. S2CID 30244721.
  2. ^ Dreyfus, Phillippe (1958-05-08) [1958-05-06]. Written at Los Angeles, California, USA. System design of the Gamma 60 (PDF). Western Joint Computer Conference: Contrasts in Computers. ACM, New York, NY, USA. pp. 130–133. IRE-ACM-AIEE ’58 (Western). Archived (PDF) from the original on 2017-04-03. Retrieved 2017-04-03. […] Internal data code is used: Quantitative (numerical) data are coded in a 4-bit decimal code; qualitative (alpha-numerical) data are coded in a 6-bit alphanumerical code. The internal instruction code means that the instructions are coded in straight binary code.
    As to the internal information length, the information quantum is called a «catena,» and it is composed of 24 bits representing either 6 decimal digits, or 4 alphanumerical characters. This quantum must contain a multiple of 4 and 6 bits to represent a whole number of decimal or alphanumeric characters. Twenty-four bits was found to be a good compromise between the minimum 12 bits, which would lead to a too-low transfer flow from a parallel readout core memory, and 36 bits or more, which was judged as too large an information quantum. The catena is to be considered as the equivalent of a character in variable word length machines, but it cannot be called so, as it may contain several characters. It is transferred in series to and from the main memory.
    Not wanting to call a «quantum» a word, or a set of characters a letter, (a word is a word, and a quantum is something else), a new word was made, and it was called a «catena.» It is an English word and exists in Webster’s although it does not in French. Webster’s definition of the word catena is, «a connected series;» therefore, a 24-bit information item. The word catena will be used hereafter.
    The internal code, therefore, has been defined. Now what are the external data codes? These depend primarily upon the information handling device involved. The Gamma 60 [fr] is designed to handle information relevant to any binary coded structure. Thus an 80-column punched card is considered as a 960-bit information item; 12 rows multiplied by 80 columns equals 960 possible punches; is stored as an exact image in 960 magnetic cores of the main memory with 2 card columns occupying one catena. […]
  3. ^ Blaauw, Gerrit Anne; Brooks, Jr., Frederick Phillips; Buchholz, Werner (1962). «4: Natural Data Units» (PDF). In Buchholz, Werner (ed.). Planning a Computer System – Project Stretch. McGraw-Hill Book Company, Inc. / The Maple Press Company, York, PA. pp. 39–40. LCCN 61-10466. Archived (PDF) from the original on 2017-04-03. Retrieved 2017-04-03. […] Terms used here to describe the structure imposed by the machine design, in addition to bit, are listed below.
    Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (i.e., different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite, but respelled to avoid accidental mutation to bit.)
    A word consists of the number of data bits transmitted in parallel from or to memory in one memory cycle. Word size is thus defined as a structural property of the memory. (The term catena was coined for this purpose by the designers of the Bull GAMMA 60 [fr] computer.)
    Block refers to the number of words transmitted to or from an input-output unit in response to a single input-output instruction. Block size is a structural property of an input-output unit; it may have been fixed by the design or left to be varied by the program. […]
  4. ^ «Format» (PDF). Reference Manual 7030 Data Processing System (PDF). IBM. August 1961. pp. 50–57. Retrieved 2021-12-15.
  5. ^ Clippinger, Richard F. (1948-09-29). «A Logical Coding System Applied to the ENIAC (Electronic Numerical Integrator and Computer)». Aberdeen Proving Ground, Maryland, US: Ballistic Research Laboratories. Report No. 673; Project No. TB3-0007 of the Research and Development Division, Ordnance Department. Retrieved 2017-04-05.
  6. ^ Clippinger, Richard F. (1948-09-29). «A Logical Coding System Applied to the ENIAC». Aberdeen Proving Ground, Maryland, US: Ballistic Research Laboratories. Section VIII: Modified ENIAC. Retrieved 2017-04-05.
  7. ^ «4. Instruction Formats» (PDF). Intel Itanium Architecture Software Developer’s Manual. Vol. 3: Intel Itanium Instruction Set Reference. p. 3:293. Retrieved 2022-04-25. Three instructions are grouped together into 128-bit sized and aligned containers called bundles. Each bundle contains three 41-bit instruction slots and a 5-bit template field.
  8. ^ Blaauw, Gerrit Anne; Brooks, Jr., Frederick Phillips (1997). Computer Architecture: Concepts and Evolution (1 ed.). Addison-Wesley. ISBN 0-201-10557-8. (1213 pages) (NB. This is a single-volume edition. This work was also available in a two-volume version.)
  9. ^ Ralston, Anthony; Reilly, Edwin D. (1993). Encyclopedia of Computer Science (3rd ed.). Van Nostrand Reinhold. ISBN 0-442-27679-6.
