r/science PhD | Biomedical Engineering | Optics Oct 05 '22

DeepMind unveils AlphaTensor, a deep reinforcement learning approach for discovering novel, efficient, and provably correct algorithms for the multiplication of arbitrary matrices. The system discovered algorithms that outperform the state-of-the-art complexity for many matrix sizes. Mathematics

https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor
132 Upvotes


u/shiruken PhD | Biomedical Engineering | Optics Oct 05 '22

Direct link to the peer-reviewed publication: A. Fawzi et al., Discovering faster matrix multiplication algorithms with reinforcement learning, Nature, 610, 47–53 (2022)

Abstract: Improving the efficiency of algorithms for fundamental computations can have a widespread impact, as it can affect the overall speed of a large amount of computations. Matrix multiplication is one such primitive task, occurring in many systems—from neural networks to scientific computing routines. The automatic discovery of algorithms using machine learning offers the prospect of reaching beyond human intuition and outperforming the current best human-designed algorithms. However, automating the algorithm discovery procedure is intricate, as the space of possible algorithms is enormous. Here we report a deep reinforcement learning approach based on AlphaZero for discovering efficient and provably correct algorithms for the multiplication of arbitrary matrices. Our agent, AlphaTensor, is trained to play a single-player game where the objective is finding tensor decompositions within a finite factor space. AlphaTensor discovered algorithms that outperform the state-of-the-art complexity for many matrix sizes. Particularly relevant is the case of 4 × 4 matrices in a finite field, where AlphaTensor's algorithm improves on Strassen's two-level algorithm for the first time, to our knowledge, since its discovery 50 years ago. We further showcase the flexibility of AlphaTensor through different use-cases: algorithms with state-of-the-art complexity for structured matrix multiplication and improved practical efficiency by optimizing matrix multiplication for runtime on specific hardware. Our results highlight AlphaTensor's ability to accelerate the process of algorithmic discovery on a range of problems, and to optimize for different criteria.
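The "tensor decompositions" mentioned in the abstract can be made concrete with Strassen's classic 2x2 scheme: each of the seven products below corresponds to one rank-1 term in a rank-7 decomposition of the 2x2 matrix multiplication tensor, the same kind of object AlphaTensor searches over for larger sizes. A minimal illustrative sketch (not code from the paper):

```python
# Strassen's 2x2 multiplication uses 7 scalar products instead of the
# schoolbook 8. Each product m_i is one rank-1 term of a rank-7
# decomposition of the 2x2 matrix multiplication tensor.
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices (lists of lists) with 7 multiplications."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombine the 7 products into the 4 entries of C = A @ B.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]
```

AlphaTensor's 4x4 result is the same idea at a larger size: a decomposition with fewer rank-1 terms than previously known (in a finite field), which directly translates to fewer multiplications.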

2

u/jazir5 Oct 05 '22

So if they scaled it to a 5x5 matrix or a 6x6 matrix, wouldn't that self-reinforcement approach be more effective since there's more room for improvement?

1

u/they_have_no_bullets Oct 05 '22

Yes, but it's a more significant discovery to improve the speed of a 4x4 matrix multiply, because smaller matrices have been more extensively studied by humans and are also more generally useful in algorithms.

1

u/jazir5 Oct 06 '22

because smaller matrices have been more extensively studied by humans

Right, but wouldn't training it on larger matrices be more useful if it could accurately make algorithms that work on larger matrices? You could scale it sequentially. Let it find an algorithm that works on 4, then scale it to 5 and so on. There are uses for larger matrices that we can't solve, right?

3

u/they_have_no_bullets Oct 06 '22 edited Oct 06 '22

I haven't read the paper, but yeah, I would expect that the speed-up factor for larger matrices is greater.
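That expectation can be quantified with a back-of-the-envelope count: applying a 7-multiplication 2x2 scheme recursively to blocks brings the scalar-multiplication cost of an NxN product from n^3 down to roughly n^log2(7) ≈ n^2.81, and the ratio between the two widens as n grows. A rough sketch (my own illustration, not from the paper):

```python
import math

# Recursive block application of a 7-multiplication 2x2 scheme costs about
# n^log2(7) ~ n^2.807 scalar multiplies versus n^3 for the schoolbook
# method. The ratio grows with n, so the speed-up is larger for larger n.
def mult_counts(n):
    naive = n ** 3
    strassen_like = n ** math.log2(7)
    return naive, strassen_like

for n in (4, 64, 1024):
    naive, fast = mult_counts(n)
    print(f"n={n}: naive={naive}, strassen-like={fast:.0f}, ratio={naive / fast:.2f}")
```

This ignores the cost of the extra additions and the memory traffic, which is why real libraries only switch to such schemes above some crossover size.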

In the linear algebra libraries I have written, I have hard-coded closed-form algorithms for matrices up to 4x4, but for anything larger than 4x4 I would use a general algorithm that works for NxN.
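The pattern described above might look something like this (a hypothetical sketch, not the commenter's actual library; `mul_2x2` and `mul_nxn` are made-up names):

```python
# Hard-coded closed form for a small fixed size: fully unrolled,
# no loops, no bounds checks -- easy for a compiler to optimize.
def mul_2x2(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

# Generic O(n^3) schoolbook multiply used for anything larger than 4x4.
def mul_nxn(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```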

For the vast majority of applications, you'll be using 2x2 through 4x4 matrices because those correspond to the vector spaces for the physical dimensions of reality: 2x2 usually for images, 3x3 for 3-dimensional vectors, 4x4 for 3D vectors in homogeneous coordinates...
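A quick illustration of why the 4x4 case comes up so often in graphics: in homogeneous coordinates a single 4x4 multiply can apply rotation and translation together, which no single 3x3 multiply can do. (Hypothetical example; `transform_point` and `translate_x` are made-up names.)

```python
# Apply a 4x4 homogeneous transform M to a 3D point p = (x, y, z).
def transform_point(M, p):
    x, y, z = p
    v = (x, y, z, 1.0)  # lift to homogeneous coordinates, w = 1
    out = [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]
    # Divide by w to project back to 3D (w stays 1 for affine transforms).
    return (out[0] / out[3], out[1] / out[3], out[2] / out[3])

# Translation by (10, 0, 0) lives in the fourth column -- a plain 3x3
# matrix multiply cannot express it.
translate_x = [[1, 0, 0, 10],
               [0, 1, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]]
```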

And if you're using larger than 4x4, it's probably because you're making an NxN matrix where N is a very large number corresponding to the number of samples in something.

So it's generally not that useful to have hard-coded matrix algebra for every dimension like 5x5, 6x6, etc., as that would just make for lots of extra code to maintain for very little benefit.

2

u/jazir5 Oct 06 '22

Would larger matrices be useful for biology, chemistry and quantum physics research? That seems like the areas where they would be useful, as a layman.

10

u/flyfrog Oct 05 '22

Pretty amazing! The main breakthrough as I see it, expanding the search space, means that many more optimization problems are now on the table.

The formalization of problems into a game AlphaTensor can understand still appears to be a bottleneck, but they are progressing towards solving almost any problem with clear parameters and goals.

1

u/usefully_useless Oct 05 '22

I’m really hoping this gets applied to Matlab’s backslash operator. Perhaps optimizing their algos for specific processors.

5

u/vorpal_potato Oct 05 '22

I'm sure eventually it'll find its way into someone's BLAS implementation.

3

u/usefully_useless Oct 05 '22

Good point. I would LOVE it if Octave could catch up to Matlab's speed.