tests: Fix wrong filename and description in test_srv6_locator.py

The file header comment and module docstring carry a wrong filename
and description. Update both to match the actual filename and describe
what the test does.
Signed-off-by: Carmine Scarpitta <cscarpit@cisco.com>
tools: Use numeric-only version for pkgconfig

Version: @PACKAGE_VERSION@ in frr.pc.in expands to something like 10.6-dev_git
when built with --with-pkg-git-version, and pkgconf on Alpine strictly
validates the version field.
Fixes: bc8f749c6e3502c93d65689eede2611b4dbbe2f5 ("build: add pkg-config file")
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
zebra: Initialize nl_errno

zebra/zebra_netns_id.c: In function 'zebra_ns_id_get':
zebra/zebra_netns_id.c:267:34: warning: 'nl_errno' may be used uninitialized [-Wmaybe-uninitialized]
  267 |         if (ret != 0 && nl_errno != EEXIST) {
      |                         ^
zebra/zebra_netns_id.c:163:13: note: 'nl_errno' was declared here
  163 |         int nl_errno;
...
zebra: EVPN fix stale remote vtep entries

The EVPN route delete paths in process_subq_early_route_add
(route replace) and process_subq_early_route_delete used
ere->afi (route address family) to determine the VTEP IP type.
For IPv4 routes with IPv6 VTEP endpoints, this incorrectly
created an IPv4 vtep_ip when the nexthop is actually IPv6.
The add path already correctly uses tmp_nh->type (nexthop type).
The mismatch meant delete never m...
lib: add rbtree pop_final api

Add a simple-minded cleanup api that lets a caller pop
items from an rbtree without undergoing rebalancing.
Signed-off-by: Mark Stapp <mjs@cisco.com>
ospf6d: clear local ifp per ECMP path rebuild

During intra-prefix ECMP recomputation, ifp was reused across
old_route->paths iterations. That can carry a previously matched
direct-connected interface into an unrelated path and install an
on-link nexthop incorrectly.

Reset ifp at each path iteration before evaluating the current
origin/path context.
Signed-off-by: Andreas Florath <Andreas.Florath@telekom.de>
bfdd: Allow for command completions to work with peers

Add the ability for bfdd to tell you more about the bfd peers
when you ask for command completion help:
eva# show bfd peer
1::2 2603:6080:602:509e:f6d2:e774:dfce:4b99
eva# show bfd peer 2603:6080:602:509e:f6d2:e774:dfce:4b99
BFD Peer:
peer 2603:6080:602:509e:f6d2:e774:dfce:4b99 local-address 2603:6080:602:509e:f6d2:e774:dfce:4b08 vrf default interface enp13s...
bfdd: Fix `show bfd peers brief` to display local address in some cases

When the bfdd peering has not yet been established and a local address
is configured, `show bfd peers brief` lists the local address as unknown.
Which is poppycock:
bfd
peer 2603:6080:602:509e:f6d2:e774:dfce:4b99 local-address 2603:6080:602:509e:f6d2:e774:dfce:4b08 interface enp13s0
exit
!
exit
!
end
eva# show bfd peers brief
Session count: 1
SessionId LocalAddress ...
bfdd: Look up the bfdd peer a bit earlier on packet reception

The bfdd code looks up the bfdd peer very late in the packet reception
path. Move the peer lookup to much earlier, mainly so that the bad packets
received can be associated with the correct peer.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
bfdd: Remove unnecessary NULL check

bfdd looks up the bfd session and, if none is found, the function
returns. The very next if statement checks whether the bfd pointer is
NULL; at that point we know it cannot be.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
bfdd, yang: Add a bad packet counter for bfd peers

Currently bfdd completely ignores bad packets received, and
there is no way to know that a bad packet has been coming
in unless you infer it through other means. This is not
easy for an operator to do.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
doc: bgp: add entry for `neighbor PEER soft-reconfiguration inbound`

Add a short entry explaining the `soft-reconfiguration inbound` command.
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
ci: harden MIB downloads and add shared workflow cache

Add a dedicated CI cache preparation flow for MIB files and restore
that cache in build jobs so Docker builds reuse cached MIB assets
instead of repeatedly downloading from external links.
Signed-off-by: Jafar Al-Gharaibeh <jafar@atcorp.com>
ci: set artifact retention and upload strictness

Set explicit retention periods for build and test artifacts
and define if-no-files-found behavior to improve CI storage
hygiene and upload diagnostics.
Signed-off-by: Jafar Al-Gharaibeh <jafar@atcorp.com>
bgpd: change L2 attr display

The L2 attr display is too long:
> L2: P flag:Y, B Flag Y, C word Y, MTU 1500
change to
> L2: Cflags CPB, MTU 1500
Signed-off-by: Loïc Sang <loic.sang@6wind.com>
Merge pull request #20917 from nishant111/nishant/bgp_fib_suppress_stale_fix

bgpd: Fix routes to be removed from rib when suppress fib pending is configured
tests: Add a topotest that shows that metaQ deduplication works for NHG

Test that the MetaQ deduplication is working as expected.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
zebra: Add a hidden command `[no] zebra test metaq disable`

Add the ability to plug the zebra metaQ to allow for testing
of deduplication.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
zebra: Limit NHG metaQ to only 1 item per NHG ID

Currently, if there is a large number of changes going on
via received NHGs and Zebra is extremely busy doing other
work as well, it is possible that the received NHG list
for processing in the metaQ is quite large. This is especially
problematic at scale. Modify the code such that the MetaQ
looks to see if the NHG being modified is already in the list.
If so, just remove the old one and keep ...
zebra: Keep high water mark for some queues

The dplane provider incoming and outgoing queues back to
zebra were not keeping high water marks. Add them:
eva# show zebra dplane providers
dataplane Incoming Queue from Zebra: 0, q_max: 5
Zebra dataplane providers:
Kernel (1): in: 77, q: 0, q_max: 5, out: 77, q: 0, q_max: 5
dataplane Outgoing Queue to Zebra: 0, q_max: 30
eva#
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
tests: Show an ordering bug in test_bgp_nhc.py

When r6 intentionally brings up bgp peering towards
r7 and r8 first and then brings up the r1 peering
the bgp_nhc feature is not working correctly.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>