
Istio in Practice (11): Envoy Request Parsing (Part 3)


Preface

Envoy is a high-performance network proxy designed for service meshes. It runs alongside the application and abstracts the network by providing common functionality in a platform-agnostic way. When all service traffic in an infrastructure flows through an Envoy mesh, consistent observability makes it easy to pinpoint problem areas and tune overall performance.

Envoy is also one of Istio's core components. It runs next to each service as a sidecar, intercepting and forwarding the service's traffic, and offers powerful features such as routing and traffic control. In this series we go beyond the official Istio and Envoy documentation: starting from the source code, we walk through Envoy startup, traffic interception, and HTTP request handling with concrete examples, and analyze Envoy's architecture in depth.

This is the third installment of the source-level walkthrough of Envoy's request flow. It covers the second half of the outbound direction: receiving the request, sending the request, receiving the response, and returning the response. Note: the issues and PRs discussed in this article are current as of December 2021.

The outbound direction

Receiving the request

  1. The client starts writing request data to the socket
  2. After the event loop fires the read event, transport_socket_->doRead reads in a loop, appending data to read_buffer_, until the socket returns EAGAIN
   void ConnectionImpl::onReadReady() {
     ENVOY_CONN_LOG(trace, "read ready. dispatch_buffered_data={}", *this, dispatch_buffered_data_);
     const bool latched_dispatch_buffered_data = dispatch_buffered_data_;
     dispatch_buffered_data_ = false;
    
     ASSERT(!connecting_);
    
     // We get here while read disabled in two ways.
     // 1) There was a call to setTransportSocketIsReadable(), for example if a raw buffer socket ceded
     //    due to shouldDrainReadBuffer(). In this case we defer the event until the socket is read
     //    enabled.
     // 2) The consumer of connection data called readDisable(true), and instead of reading from the
     //    socket we simply need to dispatch already read data.
     if (read_disable_count_ != 0) {
       // Do not clear transport_wants_read_ when returning early; the early return skips the transport
       // socket doRead call.
       if (latched_dispatch_buffered_data && filterChainWantsData()) {
         onRead(read_buffer_->length());
       }
       return;
     }
    
     // Clear transport_wants_read_ just before the call to doRead. This is the only way to ensure that
     // the transport socket read resumption happens as requested; onReadReady() returns early without
     // reading from the transport if the read buffer is above high watermark at the start of the
     // method.
     transport_wants_read_ = false;
     IoResult result = transport_socket_->doRead(*read_buffer_);
     uint64_t new_buffer_size = read_buffer_->length();
     updateReadBufferStats(result.bytes_processed_, new_buffer_size);
    
     // If this connection doesn't have half-close semantics, translate end_stream into
     // a connection close.
     if ((!enable_half_close_ && result.end_stream_read_)) {
       result.end_stream_read_ = false;
       result.action_ = PostIoAction::Close;
     }
    
     read_end_stream_ |= result.end_stream_read_;
     if (result.bytes_processed_ != 0 || result.end_stream_read_ ||
         (latched_dispatch_buffered_data && read_buffer_->length() > 0)) {
       // Skip onRead if no bytes were processed unless we explicitly want to force onRead for
       // buffered data. For instance, skip onRead if the connection was closed without producing
       // more data.
       onRead(new_buffer_size);
     }
    
     // The read callback may have already closed the connection.
     if (result.action_ == PostIoAction::Close || bothSidesHalfClosed()) {
       ENVOY_CONN_LOG(debug, "remote close", *this);
       closeSocket(ConnectionEvent::RemoteClose);
     }
   }
  3. The buffer is then passed to Envoy::Http::ConnectionManagerImpl::onData for HTTP request processing
    Network::FilterStatus ConnectionManagerImpl::onData(Buffer::Instance& data, bool) {
      if (!codec_) {
        // Http3 codec should have been instantiated by now.
        createCodec(data);
      }
     
      bool redispatch;
      do {
        redispatch = false;
     
        const Status status = codec_->dispatch(data);
     
        if (isBufferFloodError(status) || isInboundFramesWithEmptyPayloadError(status)) {
          handleCodecError(status.message());
          return Network::FilterStatus::StopIteration;
        } else if (isCodecProtocolError(status)) {
          stats_.named_.downstream_cx_protocol_error_.inc();
          handleCodecError(status.message());
          return Network::FilterStatus::StopIteration;
        }
        ASSERT(status.ok());
        // ... (remainder of the loop body elided)
      } while (redispatch);
      return Network::FilterStatus::StopIteration;
    }
  4. If codec_type is AUTO (auto-detection currently covers HTTP/1 and HTTP/2; HTTP/3 is not yet supported but is planned), Envoy decides whether the connection is HTTP/2 by checking whether the request starts with PRI * HTTP/2
    Http::ServerConnectionPtr
    HttpConnectionManagerConfig::createCodec(Network::Connection& connection,
                                             const Buffer::Instance& data,
                                             Http::ServerConnectionCallbacks& callbacks) {
      switch (codec_type_) {
      case CodecType::HTTP1: {
        return std::make_unique<Http::Http1::ServerConnectionImpl>(
            connection, Http::Http1::CodecStats::atomicGet(http1_codec_stats_, context_.scope()),
            callbacks, http1_settings_, maxRequestHeadersKb(), maxRequestHeadersCount(),
            headersWithUnderscoresAction());
      }
      case CodecType::HTTP2: {
        return std::make_unique<Http::Http2::ServerConnectionImpl>(
            connection, callbacks,
            Http::Http2::CodecStats::atomicGet(http2_codec_stats_, context_.scope()),
            context_.api().randomGenerator(), http2_options_, maxRequestHeadersKb(),
            maxRequestHeadersCount(), headersWithUnderscoresAction());
      }
      case CodecType::HTTP3:
    #ifdef ENVOY_ENABLE_QUIC
        return std::make_unique<Quic::QuicHttpServerConnectionImpl>(
            dynamic_cast<Quic::EnvoyQuicServerSession&>(connection), callbacks,
            Http::Http3::CodecStats::atomicGet(http3_codec_stats_, context_.scope()), http3_options_,
            maxRequestHeadersKb(), headersWithUnderscoresAction());
    #else
        // Should be blocked by configuration checking at an earlier point.
        NOT_REACHED_GCOVR_EXCL_LINE;
    #endif
      case CodecType::AUTO:
        return Http::ConnectionManagerUtility::autoCreateCodec(
            connection, data, callbacks, context_.scope(), context_.api().randomGenerator(),
            http1_codec_stats_, http2_codec_stats_, http1_settings_, http2_options_,
            maxRequestHeadersKb(), maxRequestHeadersCount(), headersWithUnderscoresAction());
      }
      NOT_REACHED_GCOVR_EXCL_LINE;
    }
     
     
    std::string ConnectionManagerUtility::determineNextProtocol(Network::Connection& connection,
                                                                const Buffer::Instance& data) {
      if (!connection.nextProtocol().empty()) {
        return connection.nextProtocol();
      }
     
      // See if the data we have so far shows the HTTP/2 prefix. We ignore the case where someone sends
      // us the first few bytes of the HTTP/2 prefix since in all public cases we use SSL/ALPN. For
      // internal cases this should practically never happen.
      if (data.startsWith(Http2::CLIENT_MAGIC_PREFIX)) {
        return Utility::AlpnNames::get().Http2;
      }
     
      return "";
    }
     
     
    const std::string CLIENT_MAGIC_PREFIX = "PRI * HTTP/2";
  5. HTTP parsing is driven by http_parser callbacks; ConnectionImpl::settings_ statically initializes the callbacks for each parse phase
    http_parser_settings ConnectionImpl::settings_{
        [](http_parser* parser) -> int {
          static_cast<ConnectionImpl*>(parser->data)->onMessageBeginBase();
          return 0;
        },
        [](http_parser* parser, const char* at, size_t length) -> int {
          static_cast<ConnectionImpl*>(parser->data)->onUrl(at, length);
          return 0;
        },
        nullptr, // on_status
        [](http_parser* parser, const char* at, size_t length) -> int {
          static_cast<ConnectionImpl*>(parser->data)->onHeaderField(at, length);
          return 0;
        },
        [](http_parser* parser, const char* at, size_t length) -> int {
          static_cast<ConnectionImpl*>(parser->data)->onHeaderValue(at, length);
          return 0;
        },
        [](http_parser* parser) -> int {
          return static_cast<ConnectionImpl*>(parser->data)->onHeadersCompleteBase();
        },
        [](http_parser* parser, const char* at, size_t length) -> int {
          static_cast<ConnectionImpl*>(parser->data)->onBody(at, length);
          return 0;
        },
        [](http_parser* parser) -> int {
          static_cast<ConnectionImpl*>(parser->data)->onMessageCompleteBase();
          return 0;
        },
        nullptr, // on_chunk_header
        nullptr  // on_chunk_complete
    };

There has been discussion in the Envoy community about switching the protocol parser from http_parser to llhttp.


    if (pos != absl::string_view::npos) {
          // Include \r or \n
          new_data = new_data.substr(0, pos + 1);
          ssize_t rc = http_parser_execute(&parser_, &settings_, new_data.data(), new_data.length());
          ENVOY_LOG(trace, "http inspector: http_parser parsed {} chars, error code: {}", rc,
                    HTTP_PARSER_ERRNO(&parser_));
     
          // Errors in parsing HTTP.
          if (HTTP_PARSER_ERRNO(&parser_) != HPE_OK && HTTP_PARSER_ERRNO(&parser_) != HPE_PAUSED) {
            return ParseState::Error;
          }
     
          if (parser_.http_major == 1 && parser_.http_minor == 1) {
            protocol_ = Http::Headers::get().ProtocolStrings.Http11String;
          } else {
            // Set other HTTP protocols to HTTP/1.0
            protocol_ = Http::Headers::get().ProtocolStrings.Http10String;
          }
          return ParseState::Done;
        } else {
          ssize_t rc = http_parser_execute(&parser_, &settings_, new_data.data(), new_data.length());
          ENVOY_LOG(trace, "http inspector: http_parser parsed {} chars, error code: {}", rc,
                    HTTP_PARSER_ERRNO(&parser_));
     
          // Errors in parsing HTTP.
          if (HTTP_PARSER_ERRNO(&parser_) != HPE_OK && HTTP_PARSER_ERRNO(&parser_) != HPE_PAUSED) {
            return ParseState::Error;
          } else {
            return ParseState::Continue;
          }
        }
     
        // (From a separate helper: the codec feeds http_parser one buffer slice at a time.)
        return {http_parser_execute(&parser_, &settings_, slice, len), HTTP_PARSER_ERRNO(&parser_)};
  6. onMessageBeginBase: a fresh header map is allocated and the header parsing state is reset to Field
     current_header_map_ = std::make_unique<HeaderMapImpl>();
      header_parsing_state_ = HeaderParsingState::Field;
     
     
     
    Status ConnectionImpl::onMessageBegin() {
      ENVOY_CONN_LOG(trace, "message begin", connection_);
      // Make sure that if HTTP/1.0 and HTTP/1.1 requests share a connection Envoy correctly sets
      // protocol for each request. Envoy defaults to 1.1 but sets the protocol to 1.0 where applicable
      // in onHeadersCompleteBase
      protocol_ = Protocol::Http11;
      processing_trailers_ = false;
      header_parsing_state_ = HeaderParsingState::Field;
      allocHeaders(statefulFormatterFromSettings(codec_settings_));
      return onMessageBeginBase();
    }
     
    Status ServerConnectionImpl::onMessageBeginBase() {
      if (!resetStreamCalled()) {
        ASSERT(!active_request_.has_value());
        active_request_.emplace(*this);
        auto& active_request = active_request_.value();
        if (resetStreamCalled()) {
          return codecClientError("cannot create new streams after calling reset");
        }
        active_request.request_decoder_ = &callbacks_.newStream(active_request.response_encoder_);
     
        // Check for pipelined request flood as we prepare to accept a new request.
        // Parse errors that happen prior to onMessageBegin result in stream termination, it is not
        // possible to overflow output buffers with early parse errors.
        RETURN_IF_ERROR(doFloodProtectionChecks());
      }
      return okStatus();
    }
    
  • Creates the ActiveStream, which stores the downstream information and the corresponding route information
  • For HTTPS, the SNI captured during the TLS handshake is written into ActiveStream.requested_server_name_
       void setRequestedServerName(absl::string_view requested_server_name) override {
           requested_server_name_ = std::string(requested_server_name);
         }
        
       void Filter::onServername(absl::string_view name) {
         if (!name.empty()) {
           config_->stats().sni_found_.inc();
           cb_->socket().setRequestedServerName(name);
           ENVOY_LOG(debug, "tls:onServerName(), requestedServerName: {}", name);
         } else {
           config_->stats().sni_not_found_.inc();
         }
         clienthello_success_ = true;
       }
  7. onHeaderField and onHeaderValue iteratively append headers to current_header_map_
  8. After the last request header is parsed, onHeadersComplete runs and adds request fields (method, path, host) to the headers
const Http::HeaderValues& header_values = Http::Headers::get();
active_request.response_encoder_.setIsResponseToHeadRequest(parser_->methodName() ==
                                                            header_values.MethodValues.Head);
active_request.response_encoder_.setIsResponseToConnectRequest(
    parser_->methodName() == header_values.MethodValues.Connect);
 
RETURN_IF_ERROR(handlePath(*headers, parser_->methodName()));
ASSERT(active_request.request_url_.empty());
 
headers->setMethod(parser_->methodName());
headers->setScheme("http"); 
  9. onHeadersComplete is invoked, which in turn calls onMessageComplete, onMessageCompleteBase, and ServerConnectionImpl::onMessageComplete in sequence
  • This request decoding runs in the Envoy context and executes Envoy's core proxy logic: walking the HTTP filter chain and performing route selection
  • Request overload is checked at this point
  • The cluster is looked up in the ThreadLocalClusterManager by the cluster name on the route, and cached in cached_cluster_info_
  • The filterChain configured on the route is built according to the configuration (concrete filter implementations are registered via registerFactory and constructed by name in createFilterChain, e.g. istio-proxy's stats filter)
  • If the corresponding http connection manager has a tracing config:
        if (connection_manager_.config_.tracingConfig()) {
          traceRequest();
        }
  • If the request headers carry trace context, a child span is created and sampled follows the parent span
  • If the headers carry no trace context, a root span is created and sampled is set
        void ConnectionManagerImpl::ActiveStream::refreshCachedRoute(const Router::RouteCallback& cb) {
          Router::RouteConstSharedPtr route;
          if (request_headers_ != nullptr) {
            if (connection_manager_.config_.isRoutable() &&
                connection_manager_.config_.scopedRouteConfigProvider() != nullptr) {
              // NOTE: re-select scope as well in case the scope key header has been changed by a filter.
              snapScopedRouteConfig();
            }
            if (snapped_route_config_ != nullptr) {
              route = snapped_route_config_->route(cb, *request_headers_, filter_manager_.streamInfo(),
                                                   stream_id_);
            }
          }
         
          setRoute(route);
        }
         
        void ConnectionManagerImpl::ActiveStream::decodeHeaders(RequestHeaderMapPtr&& headers,
                                                                bool end_stream) {
          ScopeTrackerScopeState scope(this,
                                       connection_manager_.read_callbacks_->connection().dispatcher());
          request_headers_ = std::move(headers);
          filter_manager_.requestHeadersInitialized();
          if (request_header_timer_ != nullptr) {
            request_header_timer_->disableTimer();
            request_header_timer_.reset();
          }
         
          Upstream::HostDescriptionConstSharedPtr upstream_host =
              connection_manager_.read_callbacks_->upstreamHost();
         
          if (upstream_host != nullptr) {
            Upstream::ClusterRequestResponseSizeStatsOptRef req_resp_stats =
                upstream_host->cluster().requestResponseSizeStats();
            if (req_resp_stats.has_value()) {
              req_resp_stats->get().upstream_rq_headers_size_.recordValue(request_headers_->byteSize());
            }
          }
         
          // Both saw_connection_close_ and is_head_request_ affect local replies: set
          // them as early as possible.
          const Protocol protocol = connection_manager_.codec_->protocol();
          state_.saw_connection_close_ = HeaderUtility::shouldCloseConnection(protocol, *request_headers_);
         
          // We need to snap snapped_route_config_ here as it's used in mutateRequestHeaders later.
          if (connection_manager_.config_.isRoutable()) {
            if (connection_manager_.config_.routeConfigProvider() != nullptr) {
              snapped_route_config_ = connection_manager_.config_.routeConfigProvider()->config();
            } else if (connection_manager_.config_.scopedRouteConfigProvider() != nullptr) {
              snapped_scoped_routes_config_ =
                  connection_manager_.config_.scopedRouteConfigProvider()->config<Router::ScopedConfig>();
              snapScopedRouteConfig();
            }
          } else {
            snapped_route_config_ = connection_manager_.config_.routeConfigProvider()->config();
          }
         
          ENVOY_STREAM_LOG(debug, "request headers complete (end_stream={}):\n{}", *this, end_stream,
                           *request_headers_);
         
          // We end the decode here only if the request is header only. If we convert the request to a
          // header only, the stream will be marked as done once a subsequent decodeData/decodeTrailers is
          // called with end_stream=true.
          filter_manager_.maybeEndDecode(end_stream);
         
          // Drop new requests when overloaded as soon as we have decoded the headers.
          if (connection_manager_.random_generator_.bernoulli(
                  connection_manager_.overload_stop_accepting_requests_ref_.value())) {
            // In this one special case, do not create the filter chain. If there is a risk of memory
            // overload it is more important to avoid unnecessary allocation than to create the filters.
            filter_manager_.skipFilterChainCreation();
            connection_manager_.stats_.named_.downstream_rq_overload_close_.inc();
            sendLocalReply(Grpc::Common::isGrpcRequestHeaders(*request_headers_),
                           Http::Code::ServiceUnavailable, "envoy overloaded", nullptr, absl::nullopt,
                           StreamInfo::ResponseCodeDetails::get().Overload);
            return;
          }
         
          if (!connection_manager_.config_.proxy100Continue() && request_headers_->Expect() &&
              request_headers_->Expect()->value() == Headers::get().ExpectValues._100Continue.c_str()) {
            // Note in the case Envoy is handling 100-Continue complexity, it skips the filter chain
            // and sends the 100-Continue directly to the encoder.
            chargeStats(continueHeader());
            response_encoder_->encode100ContinueHeaders(continueHeader());
            // Remove the Expect header so it won't be handled again upstream.
            request_headers_->removeExpect();
          }
         
          connection_manager_.user_agent_.initializeFromHeaders(*request_headers_,
                                                                connection_manager_.stats_.prefixStatName(),
                                                                connection_manager_.stats_.scope_);
         
          // Make sure we are getting a codec version we support.
          if (protocol == Protocol::Http10) {
            // Assume this is HTTP/1.0. This is fine for HTTP/0.9 but this code will also affect any
            // requests with non-standard version numbers (0.9, 1.3), basically anything which is not
            // HTTP/1.1.
            //
            // The protocol may have shifted in the HTTP/1.0 case so reset it.
            filter_manager_.streamInfo().protocol(protocol);
            if (!connection_manager_.config_.http1Settings().accept_http_10_) {
              // Send "Upgrade Required" if HTTP/1.0 support is not explicitly configured on.
              sendLocalReply(false, Code::UpgradeRequired, "", nullptr, absl::nullopt,
                             StreamInfo::ResponseCodeDetails::get().LowVersion);
              return;
            }
            if (!request_headers_->Host() &&
                !connection_manager_.config_.http1Settings().default_host_for_http_10_.empty()) {
              // Add a default host if configured to do so.
              request_headers_->setHost(
                  connection_manager_.config_.http1Settings().default_host_for_http_10_);
            }
          }
         
          if (!request_headers_->Host()) {
            // Require host header. For HTTP/1.1 Host has already been translated to :authority.
            sendLocalReply(Grpc::Common::hasGrpcContentType(*request_headers_), Code::BadRequest, "",
                           nullptr, absl::nullopt, StreamInfo::ResponseCodeDetails::get().MissingHost);
            return;
          }
         
          // Verify header sanity checks which should have been performed by the codec.
          ASSERT(HeaderUtility::requestHeadersValid(*request_headers_).has_value() == false);
         
          // Check for the existence of the :path header for non-CONNECT requests, or present-but-empty
          // :path header for CONNECT requests. We expect the codec to have broken the path into pieces if
          // applicable. NOTE: Currently the HTTP/1.1 codec only does this when the allow_absolute_url flag
          // is enabled on the HCM.
          if ((!HeaderUtility::isConnect(*request_headers_) || request_headers_->Path()) &&
              request_headers_->getPathValue().empty()) {
            sendLocalReply(Grpc::Common::hasGrpcContentType(*request_headers_), Code::NotFound, "", nullptr,
                           absl::nullopt, StreamInfo::ResponseCodeDetails::get().MissingPath);
            return;
          }
         
          // Currently we only support relative paths at the application layer.
          if (!request_headers_->getPathValue().empty() && request_headers_->getPathValue()[0] != '/') {
            connection_manager_.stats_.named_.downstream_rq_non_relative_path_.inc();
            sendLocalReply(Grpc::Common::hasGrpcContentType(*request_headers_), Code::NotFound, "", nullptr,
                           absl::nullopt, StreamInfo::ResponseCodeDetails::get().AbsolutePath);
            return;
          }
         
          // Path sanitization should happen before any path access other than the above sanity check.
          const auto action =
              ConnectionManagerUtility::maybeNormalizePath(*request_headers_, connection_manager_.config_);
          // gRPC requests are rejected if Envoy is configured to redirect post-normalization. This is
          // because gRPC clients do not support redirect.
          if (action == ConnectionManagerUtility::NormalizePathAction::Reject ||
              (action == ConnectionManagerUtility::NormalizePathAction::Redirect &&
               Grpc::Common::hasGrpcContentType(*request_headers_))) {
            connection_manager_.stats_.named_.downstream_rq_failed_path_normalization_.inc();
            sendLocalReply(Grpc::Common::hasGrpcContentType(*request_headers_), Code::BadRequest, "",
                           nullptr, absl::nullopt,
                           StreamInfo::ResponseCodeDetails::get().PathNormalizationFailed);
            return;
          } else if (action == ConnectionManagerUtility::NormalizePathAction::Redirect) {
            connection_manager_.stats_.named_.downstream_rq_redirected_with_normalized_path_.inc();
            sendLocalReply(
                false, Code::TemporaryRedirect, "",
                [new_path = request_headers_->Path()->value().getStringView()](
                    Http::ResponseHeaderMap& response_headers) -> void {
                  response_headers.addReferenceKey(Http::Headers::get().Location, new_path);
                },
                absl::nullopt, StreamInfo::ResponseCodeDetails::get().PathNormalizationFailed);
            return;
          }
         
          ASSERT(action == ConnectionManagerUtility::NormalizePathAction::Continue);
          ConnectionManagerUtility::maybeNormalizeHost(*request_headers_, connection_manager_.config_,
                                                       localPort());
         
          if (!state_.is_internally_created_) { // Only sanitize headers on first pass.
            // Modify the downstream remote address depending on configuration and headers.
            filter_manager_.setDownstreamRemoteAddress(ConnectionManagerUtility::mutateRequestHeaders(
                *request_headers_, connection_manager_.read_callbacks_->connection(),
                connection_manager_.config_, *snapped_route_config_, connection_manager_.local_info_));
          }
          ASSERT(filter_manager_.streamInfo().downstreamAddressProvider().remoteAddress() != nullptr);
         
          ASSERT(!cached_route_);
          refreshCachedRoute();
         
          if (!state_.is_internally_created_) { // Only mutate tracing headers on first pass.
            filter_manager_.streamInfo().setTraceReason(
                ConnectionManagerUtility::mutateTracingRequestHeader(
                    *request_headers_, connection_manager_.runtime_, connection_manager_.config_,
                    cached_route_.value().get()));
          }
         
          filter_manager_.streamInfo().setRequestHeaders(*request_headers_);
         
          const bool upgrade_rejected = filter_manager_.createFilterChain() == false;
         
          // TODO if there are no filters when starting a filter iteration, the connection manager
          // should return 404. The current returns no response if there is no router filter.
          if (hasCachedRoute()) {
            // Do not allow upgrades if the route does not support it.
            if (upgrade_rejected) {
              // While downstream servers should not send upgrade payload without the upgrade being
              // accepted, err on the side of caution and refuse to process any further requests on this
              // connection, to avoid a class of HTTP/1.1 smuggling bugs where Upgrade or CONNECT payload
              // contains a smuggled HTTP request.
              state_.saw_connection_close_ = true;
              connection_manager_.stats_.named_.downstream_rq_ws_on_non_ws_route_.inc();
              sendLocalReply(Grpc::Common::hasGrpcContentType(*request_headers_), Code::Forbidden, "",
                             nullptr, absl::nullopt, StreamInfo::ResponseCodeDetails::get().UpgradeFailed);
              return;
            }
            // Allow non websocket requests to go through websocket enabled routes.
          }
         
          if (hasCachedRoute()) {
            const Router::RouteEntry* route_entry = cached_route_.value()->routeEntry();
            if (route_entry != nullptr && route_entry->idleTimeout()) {
              // TODO(mattklein123): Technically if the cached route changes, we should also see if the
              // route idle timeout has changed and update the value.
              idle_timeout_ms_ = route_entry->idleTimeout().value();
              response_encoder_->getStream().setFlushTimeout(idle_timeout_ms_);
              if (idle_timeout_ms_.count()) {
                // If we have a route-level idle timeout but no global stream idle timeout, create a timer.
                if (stream_idle_timer_ == nullptr) {
                  stream_idle_timer_ =
                      connection_manager_.read_callbacks_->connection().dispatcher().createScaledTimer(
                          Event::ScaledTimerType::HttpDownstreamIdleStreamTimeout,
                          [this]() -> void { onIdleTimeout(); });
                }
              } else if (stream_idle_timer_ != nullptr) {
                // If we had a global stream idle timeout but the route-level idle timeout is set to zero
                // (to override), we disable the idle timer.
                stream_idle_timer_->disableTimer();
                stream_idle_timer_ = nullptr;
              }
            }
          }
         
          // Check if tracing is enabled at all.
          if (connection_manager_.config_.tracingConfig()) {
            traceRequest();
          }
         
          filter_manager_.decodeHeaders(*request_headers_, end_stream);
         
          // Reset it here for both global and overridden cases.
          resetIdleTimer();
        }
         
         
        void FilterManager::decodeHeaders(ActiveStreamDecoderFilter* filter, RequestHeaderMap& headers,
                                          bool end_stream) {
          // Headers filter iteration should always start with the next filter if available.
          std::list<ActiveStreamDecoderFilterPtr>::iterator entry =
              commonDecodePrefix(filter, FilterIterationStartState::AlwaysStartFromNext);
          std::list<ActiveStreamDecoderFilterPtr>::iterator continue_data_entry = decoder_filters_.end();
         
          for (; entry != decoder_filters_.end(); entry++) {
            (*entry)->maybeEvaluateMatchTreeWithNewData(
                [&](auto& matching_data) { matching_data.onRequestHeaders(headers); });
         
            if ((*entry)->skipFilter()) {
              continue;
            }
            // ... (per-filter decodeHeaders call and iteration bookkeeping elided)
          }
        }
  10. decodeHeaders is executed filter by filter according to the filters configured on the http connection manager (envoy.cors, envoy.fault, envoy.router)

The following focuses mainly on envoy.router.

  1. envoy.router
  • When constructing the RouteMatcher, the domains under virtual_hosts are walked and, based on wildcard position and domain length, split into four structures of type map<domain_len, std::unordered_map<domain, virtualHost>, std::greater<int64_t>>
    • default_virtual_host_: the domain is a single wildcard `*` (at most one is allowed)
    • wildcard_virtual_host_suffixes_: the wildcard is at the start of the domain
    • wildcard_virtual_host_prefixes_: the wildcard is at the end of the domain
    • virtual_hosts_: contains no wildcard
                    RouteMatcher::RouteMatcher(const envoy::config::route::v3::RouteConfiguration& route_config,
                                               const ConfigImpl& global_route_config,
                                               Server::Configuration::ServerFactoryContext& factory_context,
                                               ProtobufMessage::ValidationVisitor& validator, bool validate_clusters)
                        : vhost_scope_(factory_context.scope().scopeFromStatName(
                              factory_context.routerContext().virtualClusterStatNames().vhost_)) {
                      absl::optional<Upstream::ClusterManager::ClusterInfoMaps> validation_clusters;
                      if (validate_clusters) {
                        validation_clusters = factory_context.clusterManager().clusters();
                      }
                      for (const auto& virtual_host_config : route_config.virtual_hosts()) {
                        VirtualHostSharedPtr virtual_host(new VirtualHostImpl(virtual_host_config, global_route_config,
                                                                              factory_context, *vhost_scope_, validator,
                                                                              validation_clusters));
                        for (const std::string& domain_name : virtual_host_config.domains()) {
                          const std::string domain = Http::LowerCaseString(domain_name).get();
                          bool duplicate_found = false;
                          if ("*" == domain) {
                            if (default_virtual_host_) {
                              throw EnvoyException(fmt::format("Only a single wildcard domain is permitted in route {}",
                                                               route_config.name()));
                            }
                            default_virtual_host_ = virtual_host;
                          } else if (!domain.empty() && '*' == domain[0]) {
                            duplicate_found = !wildcard_virtual_host_suffixes_[domain.size() - 1]
                                                   .emplace(domain.substr(1), virtual_host)
                                                   .second;
                          } else if (!domain.empty() && '*' == domain[domain.size() - 1]) {
                            duplicate_found = !wildcard_virtual_host_prefixes_[domain.size() - 1]
                                                   .emplace(domain.substr(0, domain.size() - 1), virtual_host)
                                                   .second;
                          } else {
                            duplicate_found = !virtual_hosts_.emplace(domain, virtual_host).second;
                          }
                          if (duplicate_found) {
                            throw EnvoyException(fmt::format("Only unique values for domains are permitted. Duplicate "
                                                             "entry of domain {} in route {}",
                                                             domain, route_config.name()));
                          }
                        }
                      }
                    }
  • Lookup order: virtual_hosts_ => wildcard_virtual_host_suffixes_ => wildcard_virtual_host_prefixes_ => default_virtual_host_

Within each wildcard map, entries are iterated in the map's order (descending domain length), and the first virtual host that matches after the wildcard is stripped wins. If nothing matches at all, a 404 is returned directly.

                const VirtualHostImpl* RouteMatcher::findVirtualHost(const Http::RequestHeaderMap& headers) const {
                  // Fast path the case where we only have a default virtual host.
                  if (virtual_hosts_.empty() && wildcard_virtual_host_suffixes_.empty() &&
                      wildcard_virtual_host_prefixes_.empty()) {
                    return default_virtual_host_.get();
                  }
                 
                  // There may be no authority in early reply paths in the HTTP connection manager.
                  if (headers.Host() == nullptr) {
                    return nullptr;
                  }
                 
                  // TODO (@rshriram) Match Origin header in WebSocket
                  // request with VHost, using wildcard match
                  // Lower-case the value of the host header, as hostnames are case insensitive.
                  const std::string host = absl::AsciiStrToLower(headers.getHostValue());
                  const auto& iter = virtual_hosts_.find(host);
                  if (iter != virtual_hosts_.end()) {
                    return iter->second.get();
                  }
                  if (!wildcard_virtual_host_suffixes_.empty()) {
                    const VirtualHostImpl* vhost = findWildcardVirtualHost(
                        host, wildcard_virtual_host_suffixes_,
                        [](const std::string& h, int l) -> std::string { return h.substr(h.size() - l); });
                    if (vhost != nullptr) {
                      return vhost;
                    }
                  }
                  if (!wildcard_virtual_host_prefixes_.empty()) {
                    const VirtualHostImpl* vhost = findWildcardVirtualHost(
                        host, wildcard_virtual_host_prefixes_,
                        [](const std::string& h, int l) -> std::string { return h.substr(0, l); });
                    if (vhost != nullptr) {
                      return vhost;
                    }
                  }
                  return default_virtual_host_.get();
                }
  • Finding the route and cluster on a virtual host
    • Once a virtual host is matched by domain, the cluster lookup happens only on that virtual host; if no route there matches, a 404 is returned directly
    • A match can be one of three route kinds, depending on configuration: prefix, regex, or path
    • If weighted_clusters is configured, the request is dispatched based on the stream_id and the clusters' weights; since the stream_id is generated randomly per request, weighted_clusters dispatch is effectively random
  • If no route matches the request, a 404 is returned with the detail "no cluster match for URL"
  • If a directResponseEntry is configured, the response is returned directly
  • If the cluster name on the route cannot be found in the cluster manager, the configured clusterNotFoundResponseCode is returned
  • If the cluster is currently in maintenanceMode (related to active health checking), the request is rejected:
                // See if we are supposed to immediately kill some percentage of this cluster's traffic.
                  if (cluster_->maintenanceMode()) {
                    callbacks_->streamInfo().setResponseFlag(StreamInfo::ResponseFlag::UpstreamOverflow);
                    chargeUpstreamCode(Http::Code::ServiceUnavailable, nullptr, true);
                    callbacks_->sendLocalReply(
                        Http::Code::ServiceUnavailable, "maintenance mode",
                        [modify_headers, this](Http::ResponseHeaderMap& headers) {
                          if (!config_.suppress_envoy_headers_) {
                            headers.addReference(Http::Headers::get().EnvoyOverloaded,
                                                 Http::Headers::get().EnvoyOverloadedValues.True);
                          }
                          // Note: append_cluster_info does not respect suppress_envoy_headers.
                          modify_headers(headers);
                        },
                        absl::nullopt, StreamInfo::ResponseCodeDetails::get().MaintenanceMode);
                    cluster_->stats().upstream_rq_maintenance_mode_.inc();
                    return Http::FilterHeadersStatus::StopIteration;                                                                                                                                                       
                  }
  • Call createConnPool to obtain the upstream connection pool
                    std::unique_ptr<GenericConnPool> generic_conn_pool = createConnPool(*cluster);
                     
                    if (!generic_conn_pool) {
                      sendNoHealthyUpstreamResponse();
                      return Http::FilterHeadersStatus::StopIteration;
                    }
  • Decide whether to send to the upstream over HTTP/1 or HTTP/2 based on the cluster's features configuration and USE_DOWNSTREAM_PROTOCOL
                    std::vector<Http::Protocol>
                    ClusterInfoImpl::upstreamHttpProtocol(absl::optional<Http::Protocol> downstream_protocol) const {
                      if (downstream_protocol.has_value() &&
                          features_ & Upstream::ClusterInfo::Features::USE_DOWNSTREAM_PROTOCOL) {
                        return {downstream_protocol.value()};
                      } else if (features_ & Upstream::ClusterInfo::Features::USE_ALPN) {
                        ASSERT(!(features_ & Upstream::ClusterInfo::Features::HTTP3));
                        return {Http::Protocol::Http2, Http::Protocol::Http11};
                      } else {
                        if (features_ & Upstream::ClusterInfo::Features::HTTP3) {
                          return {Http::Protocol::Http3};
                        }
                        return {(features_ & Upstream::ClusterInfo::Features::HTTP2) ? Http::Protocol::Http2
                                                                                     : Http::Protocol::Http11};
                      }
                    }
  • Look up the cluster by name on the ThreadLocalClusterManager
                    Http::ConnectionPool::Instance*
                    ClusterManagerImpl::ThreadLocalClusterManagerImpl::ClusterEntry::connPool(
                        ResourcePriority priority, absl::optional<Http::Protocol> downstream_protocol,
                        LoadBalancerContext* context, bool peek) {
                      HostConstSharedPtr host = (peek ? lb_->peekAnotherHost(context) : lb_->chooseHost(context));
                      if (!host) {
                        if (!peek) {
                          ENVOY_LOG(debug, "no healthy host for HTTP connection pool");
                          cluster_info_->stats().upstream_cx_none_healthy_.inc();
                        }
                        return nullptr;
                      }
  • Pick a host with the load-balancing algorithm (for some algorithms, e.g. round robin, the balancing state is independent per worker, so strict ordering only holds among requests on the same worker)
                    HostConstSharedPtr LoadBalancerBase::chooseHost(LoadBalancerContext* context) {
                      HostConstSharedPtr host;
                      const size_t max_attempts = context ? context->hostSelectionRetryCount() + 1 : 1;
                      for (size_t i = 0; i < max_attempts; ++i) {
                        host = chooseHostOnce(context);
                     
                        // If host selection failed or the host is accepted by the filter, return.
                        // Otherwise, try again.
                        // Note: in the future we might want to allow retrying when chooseHostOnce returns nullptr.
                        if (!host || !context || !context->shouldSelectAnotherHost(*host)) {
                          return host;
                        }
                      }
                     
                      // If we didn't find anything, return the last host.
                      return host;
                    }
  • Get the connection pool for the chosen host and protocol (pools are managed by the ThreadLocalClusterManager and are not shared between workers)
  • If no pool is available, a 503 is returned directly and the filter chain stops
  • Determine this request's timeout from configuration (timeout, perTryTimeout)
                timeout_ = FilterUtility::finalTimeout(*route_entry_, headers, !config_.suppress_envoy_headers_,
                                                         grpc_request_, hedging_params_.hedge_on_per_try_timeout_,
                                                         config_.respect_expected_rq_timeout_);
                 
                  TimeoutData timeout;
                  if (!route.usingNewTimeouts()) {
                    if (grpc_request && route.maxGrpcTimeout()) {
                      const std::chrono::milliseconds max_grpc_timeout = route.maxGrpcTimeout().value();
                      auto header_timeout = Grpc::Common::getGrpcTimeout(request_headers);
                      std::chrono::milliseconds grpc_timeout =
                          header_timeout ? header_timeout.value() : std::chrono::milliseconds(0);
                      if (route.grpcTimeoutOffset()) {
                        // We only apply the offset if it won't result in grpc_timeout hitting 0 or below, as
                        // setting it to 0 means infinity and a negative timeout makes no sense.
                        const auto offset = *route.grpcTimeoutOffset();
                        if (offset < grpc_timeout) {
                          grpc_timeout -= offset;
                        }
                      }
                 
                      // Cap gRPC timeout to the configured maximum considering that 0 means infinity.
                      if (max_grpc_timeout != std::chrono::milliseconds(0) &&
                          (grpc_timeout == std::chrono::milliseconds(0) || grpc_timeout > max_grpc_timeout)) {
                        grpc_timeout = max_grpc_timeout;
                      }
                      timeout.global_timeout_ = grpc_timeout;
                    } else {
                      timeout.global_timeout_ = route.timeout();
                    }
                  }
                  timeout.per_try_timeout_ = route.retryPolicy().perTryTimeout();

  • Write the previously generated trace context into the request headers
  • Apply the final request modifications from the route configuration: headers_to_remove, headers_to_add, host_rewrite, rewritePathHeader
                 route_entry_->finalizeRequestHeaders(headers, callbacks_->streamInfo(),
                                                      !config_.suppress_envoy_headers_);
  • Construct the retry and shadowing objects
                 retry_state_ = createRetryState(
                       route_entry_->retryPolicy(), headers, *cluster_, request_vcluster_, config_.runtime_,
                       config_.random_, callbacks_->dispatcher(), config_.timeSource(), route_entry_->priority());
                  
                   // Determine which shadow policies to use. It's possible that we don't do any shadowing due to
                   // runtime keys.
                   for (const auto& shadow_policy : route_entry_->shadowPolicies()) {
                     const auto& policy_ref = *shadow_policy;
                     if (FilterUtility::shouldShadow(policy_ref, config_.runtime_, callbacks_->streamId())) {
                       active_shadow_policies_.push_back(std::cref(policy_ref));
                     }
                   }

Sending the request

Sending the request is also handled inside envoy.router.

  1. Check whether the current connection pool has an idle client
        if (!ready_clients_.empty()) {
           ActiveClient& client = *ready_clients_.front();
           ENVOY_CONN_LOG(debug, "using existing connection", client);
           attachStreamToClient(client, context);
              // Even if there's a ready client, we may want to preconnect to handle the next incoming stream.
           tryCreateNewConnections();

If an idle connection exists:

  • Build the request buffer to send upstream from the downstream request, tracing configuration, etc.
  • Move the whole buffer into write_buffer_ at once, which immediately triggers a Write Event
  • ConnectionImpl::onWriteReady is then invoked
  • The contents of write_buffer_ are written to the socket and sent out

If no idle connection exists:

            if (host_->cluster().resourceManager(priority_).pendingRequests().canCreate()) {
                ConnectionPool::Cancellable* pending = newPendingStream(context);
                ENVOY_LOG(debug, "trying to create new connection");
                ENVOY_LOG(trace, fmt::format("{}", *this));
             
                auto old_capacity = connecting_stream_capacity_;
                // This must come after newPendingStream() because this function uses the
                // length of pending_streams_ to determine if a new connection is needed.
                const ConnectionResult result = tryCreateNewConnections();
                // If there is not enough connecting capacity, the only reason to not
                // increase capacity is if the connection limits are exceeded.
                ENVOY_BUG(pending_streams_.size() <= connecting_stream_capacity_ ||
                              connecting_stream_capacity_ > old_capacity ||
                              result == ConnectionResult::NoConnectionRateLimited,
                          fmt::format("Failed to create expected connection: {}", *this));
                return pending;
              } else {
                ENVOY_LOG(debug, "max pending streams overflow");
                onPoolFailure(nullptr, absl::string_view(), ConnectionPool::PoolFailureReason::Overflow,
                              context);
                host_->cluster().stats().upstream_rq_pending_overflow_.inc();
                return nullptr;
              }
  • Decide whether a new connection can be created based on max_pending_requests and max_connections (these counters are shared across workers); note that each worker thread establishes at least one upstream connection, so in the extreme case the effective limit scales with the number of worker threads
  • Apply the configured socket options to the new connection, create the upstream connection with dispatcher.createClientConnection, and bind it to the event loop
  • Create a new PendingRequest and add it to the head of pending_requests_
  • When the connection is successfully established, ConnectionImpl::onFileEvent fires
  • In the onConnected callback:
    • stop connect_timer_
    • reuse the idle-connection logic above to send the request
  2. onRequestComplete calls maybeDoShadowing to mirror the traffic
        ASSERT(!request->headers().getHostValue().empty());
          // Switch authority to add a shadow postfix. This allows upstream logging to make more sense.
          auto parts = StringUtil::splitToken(request->headers().getHostValue(), ":");
          ASSERT(!parts.empty() && parts.size() <= 2);
          request->headers().setHost(parts.size() == 2
                                         ? absl::StrJoin(parts, "-shadow:")
                                         : absl::StrCat(request->headers().getHostValue(), "-shadow"));
          // This is basically fire and forget. We don't handle cancelling.
          thread_local_cluster->httpAsyncClient().send(std::move(request), *this, options);
  • Errors from shadow traffic are never returned to the client
  • Shadow traffic is sent by the async client, so it does not block the downstream request; its timeout is also global_timeout_
  • Shadowing modifies the host and authority request headers, appending a -shadow suffix
  3. Start the response-timeout timer based on global_timeout_

Receiving the response

  1. The event loop fires the read-ready branch of onFileEvent on the ClientConnectionImpl (ConnectionImpl)
  2. After http_parser executes, onHeadersComplete fires, eventually reaching UpstreamRequest::decodeHeaders
  3. upstream_request_->upstream_host_->outlierDetector().putHttpResponseCode records the status code and updates the outlier-detection state
        external_origin_sr_monitor_.incTotalReqCounter();
          if (Http::CodeUtility::is5xx(response_code)) {
            std::shared_ptr<DetectorImpl> detector = detector_.lock();
            if (!detector) {
              // It's possible for the cluster/detector to go away while we still have a host in use.
              return;
            }
            if (Http::CodeUtility::isGatewayError(response_code)) {
              if (++consecutive_gateway_failure_ ==
                  detector->runtime().snapshot().getInteger(
                      ConsecutiveGatewayFailureRuntime, detector->config().consecutiveGatewayFailure())) {
                detector->onConsecutiveGatewayFailure(host_.lock());
              }
            } else {
              consecutive_gateway_failure_ = 0;
            }
         
            if (++consecutive_5xx_ == detector->runtime().snapshot().getInteger(
                                          Consecutive5xxRuntime, detector->config().consecutive5xx())) {
              detector->onConsecutive5xx(host_.lock());
            }
          } else {
            external_origin_sr_monitor_.incSuccessReqCounter();
            consecutive_5xx_ = 0;
            consecutive_gateway_failure_ = 0;
          }
        if (grpc_status.has_value()) {
          upstream_request.upstreamHost()->outlierDetector().putHttpResponseCode(grpc_to_http_status);
        } else {
          upstream_request.upstreamHost()->outlierDetector().putHttpResponseCode(response_code);
        }
  4. Decide whether to retry based on the response, the configuration, and retries_remaining_
  • Based on the internal_redirect_action configuration and the response, decide whether to redirect to a new host
            InternalRedirectPolicyImpl RouteEntryImplBase::buildInternalRedirectPolicy(
                const envoy::config::route::v3::RouteAction& route_config,
                ProtobufMessage::ValidationVisitor& validator, absl::string_view current_route_name) const {
              if (route_config.has_internal_redirect_policy()) {
                return InternalRedirectPolicyImpl(route_config.internal_redirect_policy(), validator,
                                                  current_route_name);
              }
              envoy::config::route::v3::InternalRedirectPolicy policy_config;
              switch (route_config.internal_redirect_action()) {
              case envoy::config::route::v3::RouteAction::HANDLE_INTERNAL_REDIRECT:
                break;
              case envoy::config::route::v3::RouteAction::PASS_THROUGH_INTERNAL_REDIRECT:
                FALLTHRU;
              default:
                return InternalRedirectPolicyImpl();
              }
              if (route_config.has_max_internal_redirects()) {
                *policy_config.mutable_max_internal_redirects() = route_config.max_internal_redirects();
              }
              return InternalRedirectPolicyImpl(policy_config, validator, current_route_name);
            }
             
              if (num_internal_redirect.value() >= policy.maxInternalRedirects()) {
                config_.stats_.passthrough_internal_redirect_too_many_redirects_.inc();
                return false;
              }

Returning the response

  1. Stop the request_timer and reset the idle_timer
  2. Send the response to the downstream using the same logic as sending the request upstream

Takeaways from reading the source

  1. Envoy makes heavy use of inheritance, templates, and composition; when a subclass is initialized, pay attention to what its parent constructors do
  2. You can use the information in the request logs, in order, to walk the overall flow through the code once more
  3. Make good use of debugging tools such as packet captures, gdb, and enabling extra stats; in my experience, 90% of problems can be solved with logs, packet captures, and reading the relevant parts of the source
