Micron Technology: GDDR6X (NASDAQ:MU)

Published on September 8th, 2020

This baby [Micron Technology] will do a lot of ziggin' and zaggin' from here on its way to $40-50, or maybe even much higher. [Quote from the late, great Russ Fischer.]

I last wrote here about Micron Technology (MU) on October 7, 2019. I've been largely out of the stock since a few months after that article, and it doesn't look like I've missed much. Now I'm sticking my big toe back in, because I've been waiting for a concrete example of the de-commoditization of certain types of memory. Back in September 2018 I wrote that a few memory types were no longer a commodity. Be nice and call me early, not wrong.

So what's the announcement? Starting in mid-August, a series of news reports about new Nvidia (NVDA) GPUs came out that sound more like leaks than an organized new product rollout. Check out this article, entitled Micron Spills on GDDR6X. That article contains at least one dead link to information Micron hastily pulled down from its website. And here's the most recent excellent article from Tom's Hardware, with a provocative title: Micron Reveals GDDR6X Details: The Future of Memory, or a Proprietary DRAM? This nugget captures the essence:

Four-level pulse amplitude modulation (PAM4) signaling is the key feature of GDDR6X memory. This technique transmits two data bits per cycle using four signal levels, thus doubling the effective bandwidth for any operating frequency vs. previous-generation SGRAM types. In addition, PAM4 opens doors to higher data transfer rates (albeit at a cost). As a result, PAM4 improves both efficiency-per-clock and speeds.

Plus, as that article points out, Micron has patented this and is the sole producer at present, since the technology has not yet been adopted by JEDEC.
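To make the "two data bits per cycle" point concrete, here's a toy sketch of PAM4 symbol mapping versus conventional two-level signaling. This is my own illustration (the Gray-coded bit-to-level mapping is an assumption, not Micron's published encoding); it just shows why four levels move the same data in half the symbol slots:

```python
# Toy PAM4 vs. two-level (NRZ) signaling comparison -- illustration only,
# not Micron's actual GDDR6X encoding.

# Assumed Gray-coded mapping of bit pairs onto four signal levels.
PAM4_LEVELS = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}

def pam4_encode(bits):
    """Two bits per symbol: four-level signaling."""
    assert len(bits) % 2 == 0
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def nrz_encode(bits):
    """One bit per symbol: two-level signaling."""
    return list(bits)

data = [1, 0, 1, 1, 0, 0, 1, 1]
print(pam4_encode(data))  # 4 symbols carry what NRZ needs 8 symbols to carry
print(nrz_encode(data))   # hence double the effective bandwidth per clock
```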

Now normally this ink-stained wretch is confined to brokerage reports and chit-chats with hedge funds (who are VERY interested in the implications of this announcement for Micron stock). But in this case a private company I own is using a nifty little $99 Nvidia Jetson Nano:

[Image: Nvidia Jetson Nano. Source: Amazon.com]

to take information on incoming parts from a $179 Intel (INTC) RealSense camera:

[Image: Intel RealSense camera. Source: Amazon.com]

and feed very precise Cartesian information on the incoming part to a used Fanuc robot that probably cost over $250,000 when new:

[Image: Fanuc robot. Source: fanuc.eu]

So while the Jetson is way back in the LPDDR4 days, the device is successfully imaging at 30 frames per second. In the words of our vision guru, "The Jetson can't do everything we'd eventually like to do and that my very high-end Lenovo laptop does now, like image at 60 fps and handle all the graphics processing we need. But it's doing a credible job for a fraction of the cost." As we speed up this process and get into real-time defect detection and AI analysis, I'm delighted that Nvidia now has, with Micron's GDDR6X, a much more robust offering that will allow us to grow with other members of the same GPU family.
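For a sense of what that Jetson-side capture loop looks like, here's a minimal sketch using Intel's pyrealsense2 bindings. This is my own illustration, not the company's code, and the 640x480 stream settings are assumptions; the point is simply that a 30 fps color-plus-depth feed is what gets turned into Cartesian pick coordinates for the robot:

```python
# Minimal RealSense capture loop -- my own illustration, not the company's code.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Assumed 640x480 @ 30 fps streams, matching the 30 frames per second mentioned above.
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = np.asanyarray(frames.get_depth_frame().get_data())
    color = np.asanyarray(frames.get_color_frame().get_data())
    # ...detect the incoming part in `color`, read its depth, and convert the
    # result into Cartesian coordinates to send to the robot controller...
finally:
    pipeline.stop()
```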

So what's the big deal with this GDDR6X technology? Basically it's a LOT more bandwidth, with power going up by less than the bandwidth does, as this article from AnandTech indicates:

This is further backed up by Micron's second brief, which offers a power comparison that's normalized to GDDR6. There, Micron states that 21Gbps GDDR6X requires 15% less power per transferred bit than GDDR6 at 14Gbps. But as with Micron's first brief, this is efficiency per byte/bit, not total power consumption. So either way, it's clear that the total power consumption of GDDR6X is going to be higher than today's 14Gbps GDDR6, with Micron projecting 25-27% more power required for 21Gbps GDDR6X memory.
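A quick back-of-the-envelope check shows those quoted numbers hang together: 21 Gbps is 1.5x the data rate of 14 Gbps, and 1.5x at roughly 15% less energy per bit lands right around Micron's 25-27% total-power figure:

```python
# Back-of-the-envelope check of the numbers quoted above (illustrative arithmetic only).
gddr6_rate = 14.0            # Gbps per pin, GDDR6
gddr6x_rate = 21.0           # Gbps per pin, GDDR6X
energy_per_bit_ratio = 0.85  # GDDR6X uses ~15% less energy per transferred bit

relative_power = (gddr6x_rate / gddr6_rate) * energy_per_bit_ratio
print(f"GDDR6X total power vs. GDDR6: {relative_power:.2f}x")
# ~1.27x, i.e. roughly 27% more total power, in line with Micron's 25-27% projection
```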

And why am I so excited about one chip type that, for the moment, is proprietary? Basically, it's because the mighty Nvidia is architecting a new memory controller, as that same AnandTech article points out:

One notable innovation here is that both GDDR6X hosts and memory devices ... now implement sub-receivers in order to achieve the fidelity required to read PAM4 modulation. Each I/O pin gets an Upper, Middle, and Lower receiver, which is used to compare the PAM4 signal against that respective portion of the data eye. So while one signal is still coming in on a pin, it's being viewed by three different receivers to best determine which of the four signal levels it is. [ellipsis and emphasis are author's]
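To picture the sub-receiver idea, here's a hypothetical sketch of three comparators reading one pin. The threshold values are made up for illustration (Nvidia's actual reference voltages and eye positions aren't public at that level of detail); the point is that three comparisons are enough to distinguish four levels:

```python
# Hypothetical three-sub-receiver PAM4 decode for a single pin -- illustration only.
# The normalized threshold positions below are assumptions, not Nvidia's actual values.
THRESHOLDS = (0.25, 0.50, 0.75)  # lower, middle, upper portions of the data eye

def decode_pam4(voltage):
    """Count how many thresholds the incoming level clears; that count is the symbol (0-3)."""
    return sum(voltage > t for t in THRESHOLDS)

for v in (0.10, 0.40, 0.60, 0.90):
    print(v, "->", decode_pam4(v))  # maps to levels 0, 1, 2, 3
```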

So that means Micron is building the PAM4 signaling into this patented DRAM, and Nvidia is building a special-purpose controller to talk to it. And if Nvidia and Micron have accomplished that, might they also be working on in-memory processing like Micron's Automata? Or on a controller for a different tier of memory, like the five-year heartbreaker that is Micron's 3D XPoint, where Micron has been in the stranglehold of Intel, who has slow-rolled the chipsets that can utilize 3D XPoint? I think the answer to both questions is "Yes!", and therein lies my newfound interest in the slumbering Micron, which may be about to wake from its nap and exhibit new vigor. Clearly Micron is no longer reliant solely on Intel (INTC) to develop, produce, and market specialized controllers for some of its memories. Maybe the ink is drying on the Lehi, Utah divorce, where Micron and Intel split ways some time ago on their once jointly owned facility for new memories. And clearly Micron is once again dating.

Micron and Nvidia may only have a platonic relationship. But the currently befuddled Intel, with its revolving door of managers, its inability to stay out front on migrations to smaller chip geometries, and its measly $211 billion market cap, will take note that Nvidia, with its $311 billion market cap, is making significant inroads into Intel's GPU market share (read on) and is working very, very closely with Intel's former partner Micron.

What about other uses? While my own familiarity (and age?) has me concentrating on machine vision and AI for this product, I am of course familiar with the giant gaming market. My in-house vision guy said, "... it looks like the gaming industry is set for an upgrade in graphics capabilities. Which means people will likely need to upgrade chips to keep up with new games." And Nvidia CEO Jensen Huang pounded the point home:

"Today's launch of Nvidia Ampere GPUs is a giant step into the future. The work of thousands of engineering years, the GeForce RTX 30 Series delivers our greatest generational leap ever. Nvidia RTX fuses programmable shading, ray tracing and AI for developers to create entirely new worlds. Twenty years from now, we'll look back and realize that the future of gaming started here."

OK. Great for Micron. What about Nvidia? Alas, I haven't been closely following the rocket-ship ride that is Nvidia, although with these announcements I intend to drill in more.

Nvidia is in an interesting position to grab share in the graphics card market, since it is near the back of the pack:

As of the second quarter of 2020, Intel was the biggest vendor in the PC GPU market worldwide, occupying 64 percent of the market. AMD, who has shipped over 500 million GPUs since 2013, occupied 18 percent of the market, whilst Nvidia took a market share of 19 percent.

Wait! The number three player, AMD, sold 500 million GPUs since 2013? Hmmm... that's a pretty big market!

I admit to being baffled as to why SoftBank paid $30 billion for ARM. And I am only somewhat less baffled as to why Nvidia might buy it now for a similar price. Sure, there's lots of ARM/Nvidia tech crossover, but do they need to own it? With Nvidia's $311 billion market cap and Micron's $52 billion market cap, perhaps Nvidia could swallow both. To my mind, owning the last US memory maker and being able to tailor its memories would give Nvidia further distinction in its products, which seems desirable. But I have more work to do.

Conclusion. I wrote in September 2018 that certain Micron memories were becoming less commodity-like in nature. I've been wrong for too long, but I think this Nvidia/Micron GDDR6X announcement is a reason to get back into Micron stock. This announcement is great for this one memory type, but it's even better if it means Nvidia is developing other memory controllers and/or wants to do more than date.

I miss Russ Fischer and his writings here about Micron, Intel, Nvidia and more. He would love these developments and would have great guidance for us based on them. RIP Russ. In Russ's honor herewith one of his favorite tee shirts from his last days:

Disclosure: I am/we are long MU. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

