【Solr Startup Internals】

What does a Solr cluster do when it starts up? A lot of things, over and out. The startup flow is roughly as follows:

1. The entry point: web.xml. Solr is, at bottom, a web service and must be deployed into a Jetty or Tomcat container.

2. The SolrRequestFilter filter is implemented by org.apache.solr.servlet.SolrDispatchFilter.

    <!-- Any path (name) registered in solrconfig.xml will be sent to that filter -->
    <filter>
      <filter-name>SolrRequestFilter</filter-name>
      <filter-class>org.apache.solr.servlet.SolrDispatchFilter</filter-class>
      <!-- Exclude patterns is a list of directories that would be short circuited by the
           SolrDispatchFilter. It includes all Admin UI related static content.
           NOTE: It is NOT a pattern but only matches the start of the HTTP ServletPath. -->
      <init-param>
        <param-name>excludePatterns</param-name>
        <param-value>/css/.+,/js/.+,/img/.+,/tpl/.+</param-value>
      </init-param>
    </filter>
    <filter-mapping>
      <!-- NOTE: When using multicore, /admin JSP URLs with a core specified
           such as /solr/coreName/admin/stats.jsp get forwarded by a
           RequestDispatcher to /solr/admin/stats.jsp with the specified core
           put into request scope keyed as "org.apache.solr.SolrCore".
           It is unnecessary, and potentially problematic, to have the SolrDispatchFilter
           configured to also filter on forwards. Do not configure
           this dispatcher as <dispatcher>FORWARD</dispatcher>. -->
      <filter-name>SolrRequestFilter</filter-name>
      <url-pattern>/*</url-pattern>
    </filter-mapping>

3. [SolrDispatchFilter] Being a Filter, it must implement the three methods init(), doFilter(), and destroy().

    package javax.servlet;

    import java.io.IOException;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    public interface Filter {
      void init(FilterConfig var1) throws ServletException;

      void doFilter(ServletRequest var1, ServletResponse var2, FilterChain var3) throws IOException, ServletException;

      void destroy();
    }

4. [SolrDispatchFilter] The init() method initializes the cores and starts loading them. Note that load() is the most important method for loading the Solr cores.

    /**
     * Override this to change CoreContainer initialization
     * @return a CoreContainer to hold this server's cores
     */
    protected CoreContainer createCoreContainer(Path solrHome, Properties extraProperties) {
      NodeConfig nodeConfig = loadNodeConfig(solrHome, extraProperties); // read the solr.xml configuration file from ZK into nodeConfig
      cores = new CoreContainer(nodeConfig, extraProperties, true);
      cores.load();
      return cores;
    }

5. [SolrDispatchFilter] In a real project, after createCoreContainer has attempted to load the cores, you can start a separate thread that recovers failed cores, retrying the Solr cores whose initial load failed; a sketch follows below.
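
For illustration, here is a minimal sketch of such a recovery thread. This is not Solr source code: getCoreInitFailures() is real CoreContainer API, but retryLoad(...) is a hypothetical helper standing in for whatever reload mechanism a project wires up.

    // Hypothetical follow-up thread, started after createCoreContainer() returns.
    Thread recoverFailedCores = new Thread(() -> {
      // getCoreInitFailures() lists the cores whose initial load failed,
      // keyed by core name, with the original CoreDescriptor attached.
      for (Map.Entry<String, CoreContainer.CoreLoadFailure> failed
          : cores.getCoreInitFailures().entrySet()) {
        log.info("Retrying core that failed to load: {}", failed.getKey());
        retryLoad(cores, failed.getValue().cd); // hypothetical helper: re-run the load
      }
    }, "recover-failed-cores");
    recoverFailedCores.setDaemon(true);
    recoverFailedCores.start();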

6. [CoreContainer] load() first initializes ZooKeeper via initZooKeeper, which produces a ZkController instance.
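
Paraphrased (signatures vary across Solr versions, so treat this as a sketch rather than verbatim source), the top of load() amounts to:

    // Sketch of how load() begins: if zkHost/zkRun is configured, connect to
    // ZooKeeper and create the ZkController.
    zkSys.initZooKeeper(this, solrHome, cfg.getCloudConfig());
    // From here on isZooKeeperAware() returns true in SolrCloud mode, and cluster
    // state, the Overseer, and leader election are all reached via zkSys.getZkController().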

7. [CoreContainer] coresLocator.discover(this) walks the Solr home directory to find the cores that need to be loaded (a simplified sketch of that walk follows the excerpt below).

    // This is where the Solr cores actually start loading.
    // setup executor to load cores in parallel
    ExecutorService coreLoadExecutor = ExecutorUtil.newMDCAwareFixedThreadPool(
        cfg.getCoreLoadThreadCount(isZooKeeperAware()),
        new DefaultSolrThreadFactory("coreLoadExecutor"));
    final List<Future<SolrCore>> futures = new ArrayList<>();
    try {
      // walk Solr home and discover the cores that need to be loaded
      List<CoreDescriptor> cds = coresLocator.discover(this);
      if (isZooKeeperAware()) {
        // sort the cores if it is in SolrCloud. In standalone node the order does not matter
        CoreSorter coreComparator = new CoreSorter().init(this);
        cds = new ArrayList<>(cds); // make a copy
        Collections.sort(cds, coreComparator::compare); // sort the discovered cores
      }
      checkForDuplicateCoreNames(cds);
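
Conceptually, discover() just walks the Solr home tree and builds a CoreDescriptor for every core.properties file it finds. A simplified, self-contained sketch of that walk (the real CorePropertiesLocator additionally parses each file into a CoreDescriptor and handles I/O errors):

    import java.io.IOException;
    import java.nio.file.FileVisitResult;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.SimpleFileVisitor;
    import java.nio.file.attribute.BasicFileAttributes;
    import java.util.ArrayList;
    import java.util.List;

    public class CoreDiscoverySketch {
      /** Collect every core.properties under solrHome, one per core directory. */
      public static List<Path> discover(Path solrHome) throws IOException {
        List<Path> corePropsFiles = new ArrayList<>();
        Files.walkFileTree(solrHome, new SimpleFileVisitor<Path>() {
          @Override
          public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
            if (file.getFileName().toString().equals("core.properties")) {
              corePropsFiles.add(file);
              // a directory containing core.properties is a core root;
              // don't scan its siblings for nested cores
              return FileVisitResult.SKIP_SIBLINGS;
            }
            return FileVisitResult.CONTINUE;
          }
        });
        return corePropsFiles;
      }
    }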

Here hdfs is a single-shard, single-replica collection; during discovery the following lines show up in the log:
2018-12-10 09:52:16,581 | INFO | localhost-startStop-1 | Looking for core definitions underneath /srv/BigData/solr/solrserveradmin | org.apache.solr.core.CorePropertiesLocator.discover(CorePropertiesLocator.java:125)
2018-12-10 09:52:16,610 | INFO | localhost-startStop-1 | Found 1 core definitions | org.apache.solr.core.CorePropertiesLocator.discover(CorePropertiesLocator.java:158)

The on-disk state of the Solr instance looks like this:
host1:~ # ll /srv/BigData/solr/solrserveradmin/hdfs_shard1_replica1/
total 4
-rw------- 1 omm wheel 190 Dec 10 09:49 core.properties
host1:~ # cat /srv/BigData/solr/solrserveradmin/hdfs_shard1_replica1/core.properties
#Written by CorePropertiesLocator
#Mon Dec 10 09:49:09 CST 2018
numShards=1
collection.configName=confWithHDFS
name=hdfs_shard1_replica1
shard=shard1
collection=hdfs
coreNodeName=core_node1

8. [CoreContainer] A thread pool then loads the cores in parallel (8 concurrent threads in SolrCloud mode).

      // Start loading the cores! Iterate over all discovered cores in order.
      for (final CoreDescriptor cd : cds) {
        if (cd.isTransient() || !cd.isLoadOnStartup()) {
          solrCores.putDynamicDescriptor(cd.getName(), cd);
        } else if (asyncSolrCoreLoad) {
          solrCores.markCoreAsLoading(cd);
        }
        if (cd.isLoadOnStartup()) {
          futures.add(coreLoadExecutor.submit(() -> {
            SolrCore core;
            try {
              if (zkSys.getZkController() != null) {
                zkSys.getZkController().throwErrorIfReplicaReplaced(cd);
              }
              // Create the core from its CoreDescriptor; false means do not register with ZK yet.
              // A more detailed walkthrough follows below.
              core = create(cd, false);
            } finally {
              if (asyncSolrCoreLoad) {
                solrCores.markCoreAsNotLoading(cd);
              }
            }
            try {
              // Register with ZK here!!! This is what actually brings the shard into the cluster.
              // It does three main things:
              // 1. Shard leader election
              // 2. Replay the TLog to restore the pre-shutdown state and guarantee data consistency
              // 3. Recover data (the leader's data is pulled over to replicas); this recovery runs in the background
              zkSys.registerInZk(core, true);
            } catch (RuntimeException e) {
              SolrException.log(log, "Error registering SolrCore", e);
            }
            return core;
          }));
        }
      }
      // Done submitting the core loads.

      // Start the background thread
      backgroundCloser = new CloserThread(this, solrCores, cfg);
      backgroundCloser.start();
    } finally {
      if (asyncSolrCoreLoad && futures != null) {
        coreContainerWorkExecutor.submit((Runnable) () -> {
          try {
            for (Future<SolrCore> future : futures) {
              try {
                future.get();
              } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
              } catch (ExecutionException e) {
                log.error("Error waiting for SolrCore to be created", e);
              }
            }
          } finally {
            ExecutorUtil.shutdownAndAwaitTermination(coreLoadExecutor);
          }
        });
      } else {
        ExecutorUtil.shutdownAndAwaitTermination(coreLoadExecutor);
      }
    }
    if (isZooKeeperAware()) {
      zkSys.getZkController().checkOverseerDesignate();
    }

9. [CoreContainer] The create and registerInZk methods are described in detail below.

Let's look at the create method first.
Its Javadoc reads "Creates a new core based on a CoreDescriptor", which is short and to the point.

  1. Inside create, the preRegister call first sets the core's state to DOWN.
  2. Because publishState is passed in as false, the core does not register with ZK yet (registering with ZK involves shard leader election).
    /**
     * Creates a new core based on a CoreDescriptor.
     *
     * @param dcore        a core descriptor
     * @param publishState publish core state to the cluster if true
     *
     * @return the newly created core
     */
    private SolrCore create(CoreDescriptor dcore, boolean publishState) {
      if (isShutDown) {
        throw new SolrException(ErrorCode.SERVICE_UNAVAILABLE, "Solr has been shutdown.");
      }
      SolrCore core = null;
      try {
        MDCLoggingContext.setCore(core);
        SolrIdentifierValidator.validateCoreName(dcore.getName());
        if (zkSys.getZkController() != null) {
          // 1. Publish this core's info, including its DOWN state, to ZK's overseer job queue /overseer/queue;
          // 2. Tell zkStateReader to watch the state.json of the collection this core belongs to.
          zkSys.getZkController().preRegister(dcore);
        }
        ConfigSet coreConfig = coreConfigService.getConfig(dcore);
        log.info("Creating SolrCore '{}' using configuration from {}", dcore.getName(), coreConfig.getName());
        core = new SolrCore(dcore, coreConfig);
        MDCLoggingContext.setCore(core);

        // always kick off recovery if we are in non-Cloud mode
        if (!isZooKeeperAware() && core.getUpdateHandler().getUpdateLog() != null) {
          core.getUpdateHandler().getUpdateLog().recoverFromLog();
        }

        // Note that create() was called with publishState == false, so the call below
        // will NOT invoke zkSys.registerInZk() to register with ZK.
        registerCore(dcore.getName(), core, publishState);

        return core;
      } catch (Exception e) {
        coreInitFailures.put(dcore.getName(), new CoreLoadFailure(dcore, e));
        log.error("Error creating core [{}]: {}", dcore.getName(), e.getMessage(), e);
        final SolrException solrException = new SolrException(ErrorCode.SERVER_ERROR, "Unable to create core [" + dcore.getName() + "]", e);
        if (core != null && !core.isClosed())
          IOUtils.closeQuietly(core);
        throw solrException;
      } catch (Throwable t) {
        SolrException e = new SolrException(ErrorCode.SERVER_ERROR, "JVM Error creating core [" + dcore.getName() + "]: " + t.getMessage(), t);
        log.error("Error creating core [{}]: {}", dcore.getName(), t.getMessage(), t);
        coreInitFailures.put(dcore.getName(), new CoreLoadFailure(dcore, e));
        if (core != null && !core.isClosed())
          IOUtils.closeQuietly(core);
        throw t;
      } finally {
        MDCLoggingContext.clear();
      }
    }

10. [CoreContainer -> ZkContainer -> ZkController] registerInZk calls zkController.register(core.getName(), core.getCoreDescriptor()), which is executed on a newly started background thread, so it does not block Solr startup.

Specifically, it does three main things.

  1. Shard leader election!!! I recently read up on the election mechanism, and it is simple: the participant with the smallest mzxid becomes the leader (see the recipe sketch after the code below).
  2. Next, replay data from the update log (tlog) to restore the pre-shutdown state.
  3. Decide whether data recovery is needed.
    /**
     * Register shard with ZooKeeper.
     *
     * @return the shardId for the SolrCore
     */
    public String register(String coreName, final CoreDescriptor desc) throws Exception {
      return register(coreName, desc, false, false);
    }

    /**
     * Register shard with ZooKeeper.
     *
     * @return the shardId for the SolrCore
     */
    public String register(String coreName, final CoreDescriptor desc, boolean recoverReloadedCores, boolean afterExpiration) throws Exception {
      try (SolrCore core = cc.getCore(desc.getName())) {
        MDCLoggingContext.setCore(core);
      }
      try {
        // pre register has published our down state
        final String baseUrl = getBaseUrl();
        final CloudDescriptor cloudDesc = desc.getCloudDescriptor();
        final String collection = cloudDesc.getCollectionName();
        final String coreZkNodeName = desc.getCloudDescriptor().getCoreNodeName();
        assert coreZkNodeName != null : "we should have a coreNodeName by now";
        String shardId = cloudDesc.getShardId();
        Map<String,Object> props = new HashMap<>();
        // we only put a subset of props into the leader node
        props.put(ZkStateReader.BASE_URL_PROP, baseUrl);
        props.put(ZkStateReader.CORE_NAME_PROP, coreName);
        props.put(ZkStateReader.NODE_NAME_PROP, getNodeName());
        if (log.isInfoEnabled()) {
          log.info("Register replica - core:" + coreName + " address:" + baseUrl + " collection:"
              + cloudDesc.getCollectionName() + " shard:" + shardId);
        }
        ZkNodeProps leaderProps = new ZkNodeProps(props);

        // 1. Shard leader election!!! The participant with the smallest mzxid becomes the leader.
        try {
          // If we're a preferred leader, insert ourselves at the head of the queue
          boolean joinAtHead = false;
          Replica replica = zkStateReader.getClusterState().getReplica(desc.getCloudDescriptor().getCollectionName(), coreZkNodeName);
          if (replica != null) {
            joinAtHead = replica.getBool(SliceMutator.PREFERRED_LEADER_PROP, false);
          }
          joinElection(desc, afterExpiration, joinAtHead);
        } catch (InterruptedException e) {
          // Restore the interrupted status
          Thread.currentThread().interrupt();
          throw new ZooKeeperException(SolrException.ErrorCode.SERVER_ERROR, "", e);
        } catch (KeeperException | IOException e) {
          throw new ZooKeeperException(SolrException.ErrorCode.SERVER_ERROR, "", e);
        }

        // in this case, we want to wait for the leader as long as the leader might
        // wait for a vote, at least - but also long enough that a large cluster has
        // time to get its act together
        String leaderUrl = getLeader(cloudDesc, leaderVoteWait + 600000);

        String ourUrl = ZkCoreNodeProps.getCoreUrl(baseUrl, coreName);
        log.info("We are " + ourUrl + " and leader is " + leaderUrl);
        boolean isLeader = leaderUrl.equals(ourUrl);

        try (SolrCore core = cc.getCore(desc.getName())) {
          CoreDescriptor cd = core.getCoreDescriptor();
          if (SharedFsReplicationUtil.isZkAwareAndSharedFsReplication(cd) && !isLeader) {
            // with shared fs replication we don't init the update log until now because we need to make it read only
            // if we don't become the leader
            DelayedInitSolrCore.initIndexReaderFactory(core);
            core.getUpdateHandler().setupUlog(core, null);
            core.getSearcher(false, false, null, true);
            // the leader does this in ShardLeaderElectionContext#runLeaderProcess
          }

          // 2. Replay data from the TLog to restore the pre-shutdown state.
          // recover from local transaction log and wait for it to complete before
          // going active
          // TODO: should this be moved to another thread? To recoveryStrat?
          // TODO: should this actually be done earlier, before (or as part of)
          // leader election perhaps?
          UpdateLog ulog = core.getUpdateHandler().getUpdateLog();

          // we will call register again after zk expiration and on reload
          if (!afterExpiration && !core.isReloaded() && ulog != null && !SharedFsReplicationUtil.isZkAwareAndSharedFsReplication(cd)) {
            // disable recovery in case shard is in construction state (for shard splits)
            Slice slice = getClusterState().getSlice(collection, shardId);
            if (slice.getState() != Slice.State.CONSTRUCTION || !isLeader) {
              Future<UpdateLog.RecoveryInfo> recoveryFuture = core.getUpdateHandler().getUpdateLog().recoverFromLog();
              if (recoveryFuture != null) {
                log.info("Replaying tlog for " + ourUrl + " during startup... NOTE: This can take a while.");
                recoveryFuture.get(); // NOTE: this could potentially block for
                // minutes or more!
                // TODO: public as recovering in the mean time?
                // TODO: in the future we could do peersync in parallel with recoverFromLog
              } else {
                log.info("No LogReplay needed for core=" + core.getName() + " baseURL=" + baseUrl);
              }
            }
          }

          // 3. Decide whether data recovery is needed:
          // a. If this replica is the leader, no recovery is needed; publish the replica state as Replica.State.ACTIVE directly.
          // b. If it is not the leader, check whether recovery is needed. If so, start a new thread that recovers
          //    data from the leader up to the same data version; the replica state becomes Replica.State.RECOVERING.
          // c. Once recovery completes, the replica state is published as Replica.State.ACTIVE.
          boolean didRecovery = checkRecovery(coreName, desc, recoverReloadedCores, isLeader, cloudDesc, collection,
              coreZkNodeName, shardId, leaderProps, core, cc, afterExpiration);
          if (!didRecovery) {
            // publish the replica state as ACTIVE
            publish(desc, Replica.State.ACTIVE);
          }

          core.getCoreDescriptor().getCloudDescriptor().setHasRegistered(true);
        }

        // make sure we have an update cluster state right away
        zkStateReader.forceUpdateCollection(collection);
        return shardId;
      } finally {
        MDCLoggingContext.clear();
      }
    }
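
As promised above, the election joined by joinElection() follows the standard ZooKeeper leader-election recipe: each candidate creates a sequential ephemeral znode under the shard's election path, and the candidate with the lowest sequence number becomes leader. A minimal sketch of the bare recipe, assuming an open ZooKeeper handle zk and an existing electionPath znode (Solr's LeaderElector adds session-expiry handling, retries, and the joinAtHead shortcut seen above):

    // Bare ZooKeeper leader-election recipe:
    String myNode = zk.create(electionPath + "/n_", null,
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
    List<String> candidates = zk.getChildren(electionPath, false);
    Collections.sort(candidates); // zero-padded sequence numbers sort lexicographically
    boolean isLeader = myNode.endsWith(candidates.get(0)); // lowest sequence wins
    // A non-leader then watches the candidate immediately ahead of it and
    // re-runs this check whenever that znode disappears.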

11. [ZkController] Check whether the recovery process is needed.

    /**
     * Returns whether or not a recovery was started
     */
    private boolean checkRecovery(String coreName, final CoreDescriptor desc,
        boolean recoverReloadedCores, final boolean isLeader,
        final CloudDescriptor cloudDesc, final String collection,
        final String shardZkNodeName, String shardId, ZkNodeProps leaderProps,
        SolrCore core, CoreContainer cc, boolean afterExpiration) {
      if (SKIP_AUTO_RECOVERY) {
        log.warn("Skipping recovery according to sys prop solrcloud.skip.autorecovery");
        return false;
      }
      boolean doRecovery = true;
      // leaders don't recover, shared fs replication replicas don't recover
      CoreDescriptor cd = core.getCoreDescriptor();
      if (!isLeader && !SharedFsReplicationUtil.isZkAwareAndSharedFsReplication(cd)) {
        log.info("I am not the leader");
        if (!afterExpiration && core.isReloaded() && !recoverReloadedCores) {
          doRecovery = false;
        }
        if (doRecovery) {
          log.info("Core needs to recover:" + core.getName());
          // Recovery runs on a newly started asynchronous thread; the main thread is not blocked.
          core.getUpdateHandler().getSolrCoreState().doRecovery(cc, core.getCoreDescriptor());
          return true;
        }

        // see if the leader told us to recover
        final Replica.State lirState = getLeaderInitiatedRecoveryState(collection, shardId,
            core.getCoreDescriptor().getCloudDescriptor().getCoreNodeName());
        if (lirState == Replica.State.DOWN) {
          log.info("Leader marked core " + core.getName() + " down; starting recovery process");
          core.getUpdateHandler().getSolrCoreState().doRecovery(cc, core.getCoreDescriptor());
          return true;
        }
      } else {
        log.info("I am the leader, no recovery necessary");
      }
      return false;
    }

12. [RecoveryStrategy] Start the recovery thread. Once recovery completes, the replica state is published as Replica.State.ACTIVE.

    @Override
    public void run() {
      // set request info for logging
      try (SolrCore core = cc.getCore(coreName)) {
        if (core == null) {
          SolrException.log(LOG, "SolrCore not found - cannot recover:" + coreName);
          return;
        }
        MDCLoggingContext.setCore(core);
        LOG.info("Starting recovery process. recoveringAfterStartup=" + recoveringAfterStartup);
        try {
          // !!! Recovery actually starts here !!!
          doRecovery(core);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
          SolrException.log(LOG, "", e);
          throw new ZooKeeperException(SolrException.ErrorCode.SERVER_ERROR, "", e);
        } catch (Exception e) {
          LOG.error("", e);
          throw new ZooKeeperException(SolrException.ErrorCode.SERVER_ERROR, "", e);
        }
      } finally {
        MDCLoggingContext.clear();
      }
    }

13. [RecoveryStrategy] An overview of the recovery flow.

Recovery comes in two flavors: PeerSync and Replication. The flow first attempts a PeerSync recovery and, if that fails, falls back to a Replication recovery (a sketch follows this list).

  • PeerSync: if the outage was short and the recovering node missed only a small number of update requests, it can fetch them from the leader's update log. The threshold is 100 update requests; beyond that, a full index snapshot is pulled from the leader.
  • Replication: if the node has been offline too long to catch up from the leader's update log, or if PeerSync fails, it recovers through Solr's HTTP-based index snapshot replication.
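
A paraphrased sketch of that decision; peerSync, replicate, and publishActive are placeholder names for the corresponding steps inside RecoveryStrategy, not real method signatures:

    // Shape of the recovery decision: try PeerSync first, then fall back to replication.
    boolean success = false;
    if (tryPeerSyncFirst) {
      // PeerSync: fetch the leader's recent updates (the update log keeps roughly
      // the last 100) and replay only the ones this replica is missing.
      success = peerSync(core, leaderUrl);
    }
    if (!success) {
      // Replication: pull a full index snapshot from the leader over HTTP,
      // used when the gap is too large for PeerSync to close.
      success = replicate(core, leaderUrl);
    }
    if (success) {
      publishActive(core); // replica goes RECOVERING -> ACTIVE
    }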

Reference

https://blog.csdn.net/weixin_42257250/article/details/89512282
https://www.cnblogs.com/rcfeng/p/4145349.html


