
Changing ipv6 loopback addresses on the linecards to be on different subnets and adding t2 as a supported topology in veos #3797

Merged
1 commit merged into sonic-net:master from topo_t2 on Jul 21, 2021

Conversation

sanmalho-git (Contributor)

Description of PR

Summary:
Fixes # (issue)

Type of change

  • Bug fix
  • Testbed and Framework(new/improvement)
  • Test case(new/improvement)

Back port request

  • 201911

Approach

What is the motivation for this PR?

In the t2 topology, we have two linecards, and we were assigning their Loopback0 interfaces two different IPv6 addresses:

  • fc00:10::1/128
  • fc00:10::2/128

However, in bgpd.main.conf.j2 in sonic-buildimage (http://github.com/Azure/sonic-buildimage/blob/master/dockers/docker-fpm-frr/frr/bgpd/bgpd.main.conf.j2#L77), IPv6 Loopback0 addresses use a 64-bit mask. Thus, the route to the remote linecard's Loopback0 address was being masked by the local Loopback0 prefix and did not point to the inband port.
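
Not part of the PR, but a quick way to see the collision is Python's ipaddress module, assuming the /64 mask from the bgpd template is applied to these addresses:

```python
# Minimal sketch (illustration only): with a 64-bit mask, both of the original
# Loopback0 addresses fall into the same /64 network.
import ipaddress

local_lo = ipaddress.ip_interface("fc00:10::1/64")   # linecard 1 Loopback0, with the template's /64 mask
remote_lo = ipaddress.ip_address("fc00:10::2")       # linecard 2 Loopback0

print(local_lo.network)               # fc00:10::/64
print(remote_lo in local_lo.network)  # True -> remote loopback is covered by the local /64
```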

How did you do it?

The fix is to use a different subnet for each linecard, changing the Loopback0 addresses to (a short sketch after the list illustrates this):

  • fc00:10::1/128
  • fc00:11::1/128
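
A minimal sketch along the same lines (illustration only) shows that the new addresses land in different /64 networks, so neither loopback masks the other:

```python
# Minimal sketch: with the new per-linecard subnets, the two Loopback0
# addresses belong to distinct /64 networks.
import ipaddress

lc1 = ipaddress.ip_interface("fc00:10::1/64")
lc2 = ipaddress.ip_interface("fc00:11::1/64")

print(lc1.network)            # fc00:10::/64
print(lc2.network)            # fc00:11::/64
print(lc2.ip in lc1.network)  # False -> the route via the inband port is no longer masked
```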

Also, the t2 topology was missing from veos as a supported topology, so it is added there as well.

How did you verify/test it?

Ran voq suites in tests/voq directory with the changes above.

Any platform specific information?

Supported testbed topology if it's a new test case?

Documentation

…subnets and adding t2 as a supported topology in veos

In the t2 topology, we have two linecards and were assigning their Loopback0 interfaces two different IPv6 addresses:

- fc00:10::1/128
- fc00:10::2/128

However, in bgpd.main.conf.j2 in sonic-buildimage (http://github.com/Azure/sonic-buildimage/blob/master/dockers/docker-fpm-frr/frr/bgpd/bgpd.main.conf.j2#L77), IPv6 Loopback0 addresses use a 64-bit mask. Thus, the route to the remote linecard's Loopback0 address was being masked by the local Loopback0 prefix and did not point to the inband port.

The fix is to use a different subnet for each linecard, changing the Loopback0 addresses to:
- fc00:10::1/128
- fc00:11::1/128

Also, the t2 topology was missing from veos as a supported topology, so it is added there as well.
sanmalho-git requested a review from a team as a code owner on July 14, 2021.
abdosi merged commit 1d66a2b into sonic-net:master on Jul 21, 2021.
vmittal-msft pushed a commit to vmittal-msft/sonic-mgmt that referenced this pull request Sep 28, 2021
…subnets and adding t2 as a supported topology in veos (sonic-net#3797)

sanmalho-git deleted the topo_t2 branch on April 15, 2022.