\"\"
By Ryan Perera, Vice President, Asia Content & Subsea Networks, India & the subcontinent, Ciena
Data is being created everywhere, in and around our homes, offices, factories and machines. And, as enterprises pursue digital transformation and continue to evolve toward Industry 4.0, data growth will be further driven by the use of Digital Twin strategies that combine connected Internet of Things (IoT) devices, cognitive services and cloud computing services. New, emerging applications like the Metaverse will also drive growth and put more pressure on our underlying communication networks. In fact, Credit Suisse [1] estimates that the increasing interest in the Metaverse's immersive applications and 3D environments will require telecom access networks to support 24 times more data usage over the next ten years, delivered reliably, cost-effectively and with lower latency.

With exabytes of data being created daily, data lakes are being used by enterprises and public cloud providers to process, store and transform data to deliver insights and improve consumer experiences. These large bodies of data are becoming Centers of [Data] Gravity [2] for enterprise systems, drawing other data and applications close, much as gravity acts on objects around a planet. As the (data) mass increases, so does the strength of the (data) gravitational pull. In the past, data centers were built in locations optimal for space and power. Now, storage-oriented 'data lakes' are being built closer to end users, and these data lakes, backed by CPU/GPU power, are pulling applications and workloads toward them.

The effect of data gravity

Digital Realty's Data Gravity Index [3] report estimates that by 2024, the Global 2000 (G2000) enterprises across 53 metros will create 1.4 million gigabytes of data per second, process an additional 30 petaflops and store an additional 622 terabytes per second. This will certainly amplify data gravity. Data Gravity Intensity [4], which is determined by data mass, level of data activity, bandwidth and, of course, latency, is expected to grow at a 153% CAGR in the Asia Pacific region, with certain metros exerting a larger attraction.
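For intuition, Digital Realty's index is commonly cited as a multiplicative combination of these four factors, with latency squared in the denominator. The short sketch below uses that reported form; the function and the input values are illustrative assumptions, not figures from the report.

```python
# Illustrative sketch of the commonly cited Data Gravity Index formula:
# intensity = (data mass x data activity x bandwidth) / latency^2.
# All input values below are hypothetical, not taken from the report.

def data_gravity_intensity(data_mass_gb: float,
                           data_activity_gbps: float,
                           bandwidth_gbps: float,
                           latency_ms: float) -> float:
    """Relative gravity score for a metro; higher means stronger pull."""
    return (data_mass_gb * data_activity_gbps * bandwidth_gbps) / (latency_ms ** 2)

# Halving the latency to a metro quadruples its score, which is why
# proximity to end users dominates modern data center siting decisions.
baseline = data_gravity_intensity(5e9, 1_400, 400, 10.0)
nearer = data_gravity_intensity(5e9, 1_400, 400, 5.0)
print(f"baseline: {baseline:.3g}, half the latency: {nearer:.3g}")
```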

\"\"
<\/span><\/figcaption><\/figure>Figure 1: Data Gravity Centers in Asia
<\/em>
Data Gravity Intensity in Asia Pacific is concentrated where large public data center regions are located. These centers (red bubbles in Figure 1, with capacity in megawatts) are well served by both terrestrial and submarine networks (blue cylinders, with capacity in terabits per second). Additionally, more than 17 new open-line submarine cable systems are expected to be commissioned between 2023 and 2025 to interconnect these regions with the lowest latency and the highest spectral efficiency. Leading regional telecom providers are partnering with public cloud providers to build these new submarine network corridors.

Given the ever-increasing gravitational pull of these data clusters, we expect them to grow further while pulling smaller clusters to be built nearby. As Figure 1 shows, the high-intensity data gravity sites are mostly in highly populated urban metros. To mitigate power and space limitations, we see these data centers growing in cluster fashion over optical WAN mesh underlay networks, including campus-type data center clusters. Gone are the days when hyperscale data centers were built in remote locations around the world.

Data gravity can, however, create unforeseen challenges for digital transformation when factoring in business locations, proximity to users (latency), bandwidth (availability and cost), regulatory constraints, compliance and data privacy. Public clouds, with their vast portfolios of services, have long been seen as the obvious destination for all enterprise workloads. But given egress costs, data security, over-dependency and disaster recovery concerns, the majority of enterprises are now pursuing hybrid multi-cloud strategies while trying to navigate data gravity barriers.

\"\"
<\/span><\/figcaption><\/figure>Figure 2: Data Creation Cycle & Data Gravitational Pull
<\/em>
Navigating data gravity barriers
To address the challenges of data gravity, enterprises are fast adopting neutral co-location sites (centers of data exchange) to store data with low-latency connectivity to both public and on-premises clouds. In fact, 451 Research [5] found that 63% of enterprises still own and operate data center facilities, and many expect to leverage third-party colocation sites such as Multi-Tenant Data Centers (MTDCs) for access to multi-cloud and other ecosystems while navigating disaster recovery and data gravity barriers.

The distributed infrastructure of computing, network and storage will increasingly involve specialized resources, such as chipsets for artificial intelligence (AI) training and inference, alongside general-purpose applications. Furthermore, edge cloud systems will be of limited scale given space and power constraints. Thus, to avoid stranding resources, the industry has identified the need for a Balanced System [6], which aims at optimal use of distributed computing, storage and network connectivity resources. Achieving this balance requires a declarative programming model that tightly couples the application context with the infrastructure state. In an Application Driven Networking paradigm, applications care about completion times for Remote Procedure Call (RPC) sessions between compute nodes, not just connection latencies. The ecosystem of network operators, including public cloud providers, MTDCs and telecom service providers, must participate in this paradigm with scalable, programmable network infrastructure while exposing the relevant APIs to application providers, as sketched below.
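To make the declarative idea concrete, here is a minimal, hypothetical sketch of what an application-driven intent might look like. The NetworkIntent structure and submit_intent function are illustrative inventions, not a real operator or cloud API: the point is that the application declares an RPC completion-time outcome and leaves path and resource decisions to the network.

```python
# Hypothetical sketch of a declarative, application-driven networking intent.
# NetworkIntent and submit_intent are illustrative, not a real operator API.

from dataclasses import dataclass

@dataclass
class NetworkIntent:
    """What the application declares: outcomes, not paths or ports."""
    src_cluster: str           # compute cluster originating the RPCs
    dst_cluster: str           # data lake or MTDC destination
    rpc_completion_ms: float   # target completion time for the RPC session
    min_bandwidth_gbps: float  # bandwidth floor needed to meet the target

def submit_intent(intent: NetworkIntent) -> None:
    # A real controller would translate the intent into packet- and
    # optical-layer state (paths, wavelengths, queueing policy) and keep
    # it reconciled as the infrastructure state changes.
    print(f"{intent.src_cluster} -> {intent.dst_cluster}: "
          f"RPC completion <= {intent.rpc_completion_ms} ms, "
          f"bandwidth >= {intent.min_bandwidth_gbps} Gb/s")

submit_intent(NetworkIntent("ai-training-mumbai", "data-lake-singapore",
                            rpc_completion_ms=40.0, min_bandwidth_gbps=100.0))
```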

How can the industry adapt?
In an era of distributed Centers of [Data] Gravity, MTDCs will play a vital role, serving as co-location and data exchange points with high-capacity, low-latency interconnections to the public clouds and mitigating the data gravity barriers for enterprises.

Additionally, in a distributed cloud computing environment, a Balanced System is needed more than ever, with tighter coupling between application context and network state. Network providers and the vendor ecosystem have a key role to play in building adaptive networks that are scalable and programmable, with the relevant APIs exposed to application providers.

Disclaimer: All views, opinions and data expressed here are solely the author's and are provided for general information only. They do not constitute a representation or warranty of any kind, express or implied, regarding the accuracy, adequacy, validity, availability, reliability or completeness of any information in this blog.

References:
1. https://www.credit-suisse.com/media/assets/corporate/docs/about-us/media/media-release/2022/03/metaverse-14032022.pdf
2. https://datacentremagazine.com/technology-and-ai/what-data-gravity
3. https://www.digitalrealty.asia/platform-digital/data-gravity-index
4. https://futurecio.tech/understanding-data-gravity-intensity-traps-and-opportunities-in-2021/
5. https://go.451research.com/2020-mi-trends-driving-multi-tenant-datacenter-service-industry.html
6. https://www.youtube.com/watch?v=Am_itCzkaE0
