
Why do programming languages use the asterisk for multiplication?

2 points by pallas_athena over 1 year ago

2 comments

elashri over 1 year ago
Mathematica (the Wolfram Language) uses the asterisk (*) for general multiplication (scalar × scalar or scalar × matrix) but the dot (.) for matrix multiplication. So if you have defined A as a constant (or a scalar in general) and B as a matrix, then A*B will work, while A.B will give a dimension error.

If both A and B are matrices, then A.B gives you matrix multiplication, while A*B gives you element-wise multiplication.

This behavior is confusing at first, but if you work a lot with mathematics you will find it makes sense after some time.
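For comparison, NumPy in Python draws the same line, just spelled differently: * is element-wise and @ is the matrix product. A minimal sketch (the arrays here are purely illustrative):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])

    print(2 * A)   # scalar * matrix works: every entry is scaled
    print(A * B)   # matrix * matrix is the element-wise (Hadamard) product
    print(A @ B)   # matrix @ matrix is true matrix multiplication

    # As with Mathematica's scalar . matrix, mixing a scalar into the
    # matrix-product operator fails: np.matmul(2, A) raises a ValueError,
    # because a scalar has no dimension to contract against.

The design choice is the one elashri describes: one operator for the pointwise algebra, a separate one for the linear-algebra product.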
ggm over 1 year ago
Using the full stop for multiplication in 7-bit ASCII would have meant the full stop could no longer be used safely as a decimal point. Using x or X would have meant x or X could not be used as a variable name.

Functions would have worked fine. I suspect functions came after expressions when God invented imperative programs; when he invented Lisp, in the beginning was the bracket, and it was good, and he saw that it was good. Functions came next.
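The lexical clash is easy to see: if the full stop were the multiplication operator, a tokenizer could not tell a decimal literal from a product. A small illustration (real Python syntax used to show what the ambiguous spellings would collide with):

    # If "." meant multiply, "3.4" would be ambiguous: the float 3.4,
    # or 3 times 4? Python's lexer greedily takes the decimal reading.
    print(3.4)     # the float three-point-four
    print(3 * 4)   # the asterisk never collides with number literals

    # Likewise, an operator spelled "x" would collide with identifiers:
    x = 7
    print(x * 2)   # unambiguous today; "x x 2" would not be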