Mathematical Functions

NumPy provides a comprehensive library of mathematical functions that operate element-wise on arrays. From trigonometry and hyperbolic functions to rounding, exponents, logarithms, and arithmetic operations, these ufuncs (universal functions) are the building blocks for implementing mathematical formulas in vectorized form. Every ML loss function, activation function, and data transformation can be expressed using these primitives – making fluency with NumPy math functions essential for both understanding and implementing algorithms from scratch.

import numpy as np
np.__version__
__author__ = "kyubyong. kbpark.linguist@gmail.com. https://github.com/kyubyong"

Trigonometric Functions

np.sin(), np.cos(), and np.tan() compute trigonometric values element-wise, while np.arcsin(), np.arccos(), and np.arctan() compute their inverses. np.degrees() and np.radians() convert between angle units. Trigonometric functions appear in signal processing, physics simulations, positional encodings in transformer models, and rotational transformations in computer vision.
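
As a quick sketch (the variable names here are illustrative, not from the exercises), the unit-conversion helpers pair naturally with the trig ufuncs and their inverses:

```python
import numpy as np

deg = np.array([0., 30., 90.])
rad = np.radians(deg)            # convert to radians before calling trig ufuncs

s = np.sin(rad)                  # [0.0, 0.5, 1.0] up to float error
back = np.degrees(np.arcsin(s))  # inverse recovers the original angles
```

Note that the trig ufuncs always interpret their input as radians, so degree-valued data must be converted first.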

Q1. Calculate sine, cosine, and tangent of x, element-wise.

x = np.array([0., 1., 30, 90])

Q2. Calculate inverse sine, inverse cosine, and inverse tangent of x, element-wise.

x = np.array([-1., 0, 1.])

Q3. Convert angles from radians to degrees.

x = np.array([-np.pi, -np.pi/2, np.pi/2, np.pi])

Q4. Convert angles from degrees to radians.

x = np.array([-180.,  -90.,   90.,  180.])

Hyperbolic Functions

np.sinh(), np.cosh(), and np.tanh() compute hyperbolic sine, cosine, and tangent respectively. The tanh function is particularly important in machine learning – it was historically one of the most popular activation functions in neural networks because it outputs values in the open interval (-1, 1) and is zero-centered, which helps with gradient flow during backpropagation.
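
A minimal illustration (values chosen arbitrarily) of the hyperbolic ufuncs, including the identity cosh(x)**2 - sinh(x)**2 == 1 that they satisfy element-wise:

```python
import numpy as np

x = np.linspace(-3., 3., 7)          # [-3, -2, -1, 0, 1, 2, 3]
t = np.tanh(x)                       # squashed into (-1, 1), zero-centered

# hyperbolic analogue of sin^2 + cos^2 = 1
identity = np.cosh(x)**2 - np.sinh(x)**2
```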

Q5. Calculate hyperbolic sine, hyperbolic cosine, and hyperbolic tangent of x, element-wise.

x = np.array([-1., 0, 1.])

Rounding

NumPy provides several rounding functions with different behaviors: np.around() rounds to the nearest value, breaking ties toward the nearest even number (banker’s rounding), np.floor() rounds down toward negative infinity, np.ceil() rounds up toward positive infinity, and np.trunc() truncates toward zero. Understanding the differences matters for financial calculations, binning continuous data into discrete categories, and ensuring consistent behavior across platforms.
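
The contrast is easiest to see on ties and negative values; a small sketch with hand-picked inputs:

```python
import numpy as np

v = np.array([2.5, -2.5, 2.9])
rounded   = np.around(v)  # [ 2., -2.,  3.]  ties go to the nearest even integer
floored   = np.floor(v)   # [ 2., -3.,  2.]  toward -inf
ceiled    = np.ceil(v)    # [ 3., -2.,  3.]  toward +inf
truncated = np.trunc(v)   # [ 2., -2.,  2.]  toward zero
```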

Q6. Predict the results of these, paying attention to the differences among these related functions.

x = np.array([2.1, 1.5, 2.5, 2.9, -2.1, -2.5, -2.9])

out1 = np.around(x)
out2 = np.floor(x)
out3 = np.ceil(x)
out4 = np.trunc(x)
out5 = [round(elem) for elem in x]

print(out1)
print(out2)
print(out3)
print(out4)
print(out5)

Q7. Implement out5 in the above question using numpy.

Sums, Products, Differences

These aggregate functions reduce arrays along specified axes. np.sum() and np.prod() compute sums and products. np.cumsum() and np.cumprod() compute cumulative versions. np.diff() computes differences between consecutive elements. np.min(), np.max(), and np.mean() extract summary statistics. The axis parameter controls which dimension is reduced, and keepdims=True preserves the original number of dimensions (useful for broadcasting). These operations are the foundation of data aggregation and statistical analysis.
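
A short sketch (illustrative values) showing how the axis and keepdims arguments change the result shape:

```python
import numpy as np

m = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8]])

total  = np.sum(m)                         # 36, reduces over all elements
by_col = np.sum(m, axis=0)                 # [ 6,  8, 10, 12], shape (4,)
by_row = np.sum(m, axis=1, keepdims=True)  # [[10], [26]], shape (2, 1),
                                           # broadcastable back against m
```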

Q8. Predict the results of these.

x = np.array(
    [[1, 2, 3, 4],
     [5, 6, 7, 8]])

outs = [np.sum(x),
        np.sum(x, axis=0),
        np.sum(x, axis=1, keepdims=True),
        "",
        np.prod(x),
        np.prod(x, axis=0),
        np.prod(x, axis=1, keepdims=True),
        "",
        np.cumsum(x),
        np.cumsum(x, axis=0),
        np.cumsum(x, axis=1),
        "",
        np.cumprod(x),
        np.cumprod(x, axis=0),
        np.cumprod(x, axis=1),
        "",
        np.min(x),
        np.min(x, axis=0),
        np.min(x, axis=1, keepdims=True),
        "",
        np.max(x),
        np.max(x, axis=0),
        np.max(x, axis=1, keepdims=True),
        "",
        np.mean(x),
        np.mean(x, axis=0),
        np.mean(x, axis=1, keepdims=True)]
           
for out in outs:
    if isinstance(out, str):  # the "" entries are separators, not arrays
        print()
    else:
        print("->", out)

Q9. Calculate the difference between neighboring elements, element-wise.

x = np.array([1, 2, 4, 7, 0])

Q10. Calculate the difference between neighboring elements, element-wise, and prepend [0, 0] and append [100] to it.

x = np.array([1, 2, 4, 7, 0])

Q11. Return the cross product of x and y.

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])

Exponents and Logarithms

np.exp() computes e^x, np.exp2() computes 2^x, and np.expm1() computes e^x - 1 with better precision for small x. np.log(), np.log2(), and np.log10() compute logarithms in different bases, and np.log1p() computes log(1+x) with better precision for small x. These functions are ubiquitous in ML: the softmax function uses exp, cross-entropy loss uses log, information theory metrics use log2, and log1p/expm1 prevent numerical underflow in edge cases.
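
The precision advantage of np.log1p() is visible with very small inputs, where 1 + x rounds to 1.0 in float64; a minimal sketch:

```python
import numpy as np

tiny = np.array([1e-99, 1e-100])
naive = np.log(1 + tiny)   # 1 + 1e-99 == 1.0 in float64, so this underflows to 0.0
stable = np.log1p(tiny)    # computed directly, keeps full precision (~tiny itself)
```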

Q12. Compute \(e^x\), element-wise.

x = np.array([1., 2., 3.], np.float32)

Q13. Calculate exp(x) - 1 for all elements in x.

x = np.array([1., 2., 3.], np.float32)

Q14. Calculate \(2^p\) for all p in x.

x = np.array([1., 2., 3.], np.float32)

Q15. Compute natural, base 10, and base 2 logarithms of x element-wise.

x = np.array([1, np.e, np.e**2])

Q16. Compute the natural logarithm of one plus each element in x in floating-point accuracy.

x = np.array([1e-99, 1e-100])

Floating Point Routines

np.signbit() tests whether the sign bit is set (negative), and np.copysign() copies the sign from one array to another. These low-level routines are useful for implementing custom mathematical functions that need to handle positive and negative values differently, and for ensuring consistent sign behavior in numerical algorithms.
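
A small sketch (illustrative values); note that np.signbit() distinguishes -0.0 from +0.0, which a plain < 0 test does not:

```python
import numpy as np

vals = np.array([-3., -0., 0., 2.])
neg = np.signbit(vals)      # [ True,  True, False, False]; catches -0.0 too

# magnitude 1 everywhere, sign copied element-wise from vals
signed_ones = np.copysign(np.ones_like(vals), vals)
```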

Q17. Return element-wise True where signbit is set.

x = np.array([-3, -2, -1, 0, 1, 2, 3])

Q18. Change the sign of x to that of y, element-wise.

x = np.array([-1, 0, 1])
y = -1.1

Arithmetic Operations

np.add(), np.subtract(), np.multiply(), np.divide(), np.negative(), np.reciprocal(), np.power(), and np.mod() are the function equivalents of arithmetic operators (+, -, *, /, etc.). While the operator syntax is more common, the function forms are useful when you need to pass an operation as an argument to another function (e.g., np.ufunc.reduce()) or when you want explicit control over output arrays and casting rules.
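
For instance, only the function forms expose the ufunc machinery such as reduce; a minimal sketch with illustrative values:

```python
import numpy as np

a = np.array([1, 2, 3, 4])

added = np.add(a, 10)            # identical to a + 10

# ufunc methods are only reachable through the function form:
total = np.add.reduce(a)         # 10, same result as np.sum(a)
prod = np.multiply.reduce(a)     # 24, same result as np.prod(a)
```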

Q19. Add x and y element-wise.

x = np.array([1, 2, 3])
y = np.array([-1, -2, -3])

Q20. Subtract y from x element-wise.

x = np.array([3, 4, 5])
y = np.array(3)

Q21. Multiply x by y element-wise.

x = np.array([3, 4, 5])
y = np.array([1, 0, -1])

Q22. Divide x by y element-wise in two different ways.

x = np.array([3., 4., 5.])
y = np.array([1., 2., 3.])

Q23. Compute numerical negative value of x, element-wise.

x = np.array([1, -1])

Q24. Compute the reciprocal of x, element-wise.

x = np.array([1., 2., .2])

Q25. Compute \(x^y\), element-wise.

x = np.array([[1, 2], [3, 4]])
y = np.array([[1, 2], [1, 2]])

Q26. Compute the remainder of x / y element-wise in two different ways.

x = np.array([-3, -2, -1, 1, 2, 3])
y = 2

Miscellaneous

np.clip() constrains values to a range (useful for gradient clipping in neural networks and clamping pixel values). np.square() and np.sqrt() compute element-wise squares and square roots. np.absolute() computes absolute values, and np.sign() returns the sign of each element. These utility functions appear frequently in loss computation, data normalization, and custom activation functions.
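
A brief sketch with illustrative values:

```python
import numpy as np

g = np.array([-5., 0.5, 12.])
clipped = np.clip(g, -1.0, 1.0)   # [-1. ,  0.5,  1. ], e.g. gradient clipping
mag = np.sqrt(np.square(g))       # equals np.absolute(g) for real inputs
signs = np.sign(g)                # [-1.,  1.,  1.]
```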

Q27. If an element of x is smaller than 3, replace it with 3, and if an element of x is bigger than 7, replace it with 7.

x = np.arange(10)

Q28. Compute the square of x, element-wise.

x = np.array([1, 2, -1])

Q29. Compute square root of x element-wise.

x = np.array([1., 4., 9.])

Q30. Compute the absolute value of x.

x = np.array([[1, -1], [3, -3]])

Q31. Compute an element-wise indication of the sign of x.

x = np.array([1, 3, 0, -1, -3])